PMC6989633 | PMID 31996712

Introduction
The composition of Australia’s theropod fauna is poorly understood in comparison to those of contemporaneous assemblages around the world, due primarily to the isolated and fragmentary mode of preservation in fossiliferous deposits. To date, the majority of documented theropod remains from Australia are from the ‘mid’-Cretaceous (Albian–Cenomanian) and pertain predominantly to megaraptorids 1 – 7 , an exclusively Gondwanan clade of theropods initially interpreted as a member of Allosauroidea 2 . However, recent hypotheses have suggested alternative positions for megaraptorids within Tyrannosauroidea 8 – 11 or close to the base of Coelurosauria 12 , 13 . Despite the preponderance of megaraptorids in ‘mid’-Cretaceous Australia, a diverse high palaeo-latitude (approximately 60 degrees south) theropod fauna has been hypothesised within the upper Barremian–lower Albian deposits on the south coast of Victoria, including megaraptorans 3 , 5 , 14 , ceratosaurs 15 , spinosaurids 16 , tyrannosauroids 3 , 17 , possible unenlagiine dromaeosaurids and indeterminate maniraptoriforms 3 .
While members of Avetheropoda were undoubtedly present during the Cretaceous of Australia, the evidence for Ceratosauria in Australia is presently very limited, despite their abundance in the diverse Patagonian theropod fossil record 8 . The first suggested Australian ceratosaur came not from the better known Cretaceous sites in eastern Australia, but from the Middle Jurassic Colalura Sandstone of Western Australia. Ozraptor subotaii was described from a distal tibia characterised by a depressed and subdivided facet for the ascending process of the astragalus 18 . Examination of the tibial fragment failed to identify any convincing similarities with any theropod known at the time, and thus Ozraptor was referred to as an indeterminate theropod 17 . Subsequently, the description of abelisauroid remains from the Late Jurassic of Africa included tibiae that also had astragalar articular surfaces similar to that of Ozraptor . On this basis, it was suggested that the Australian tibia represented a member of Abelisauroidea 19 . This interpretation was maintained in a reassessment of a theropod distal tibia from the Middle Jurassic of England 20 , which concluded that a depressed and subdivided facet for the astragalar ascending process was a synapomorphy of Abelisauroidea. However, this character was subsequently recognised in theropods outside of Abelisauroidea and therefore could not be considered as an abelisauroid synapomorphy 21 . As a consequence there was no convincing evidence to support abelisauroid affinities for Ozraptor . The current consensus is that Ozraptor is too incomplete for referral to any theropod clade 21 , 22 .
It has also been suggested that Kakuru kujani , known from a partial tibia from the Aptian Marree Formation of South Australia 23 , pertains to an abelisauroid based on the presence of a vertical median ridge on the distal tibia 24 . For the reasons stated above, this evidence is insufficient for referral of Kakuru to Abelisauroidea; subsequent revisions of this material concluded that Kakuru could only be referred to an indeterminate position within either Averostra or Tetanurae 25 , 26 .
More recently, a left astragalocalcaneum from the upper Barremian–lower Aptian San Remo Member of the upper Strzelecki Group on the south coast of Victoria was described (Museum Victoria, Melbourne, Australia; NMV P221202, Fig. 1 ) and referred to Ceratosauria, based among other features on the co-ossification of the astragalus and calcaneum, a parallel-sided base of the ascending process of the astragalus, and a fossa at the base of the ascending process that is not associated with a transverse groove 15 . However, it was subsequently suggested that the evidence for referral of NMV P221202 to Ceratosauria was weak, and that it could only be considered as an indeterminate averostran at best 8 .
Here, we present new evidence for the presence of ceratosaurian theropods from the Cenomanian Griman Creek Formation of Lightning Ridge, New South Wales. We also reappraise the evidence against the ceratosaurian interpretation of the specimen NMV P221202 8 with the objective of clarifying and elucidating its phylogenetic position.

Methods and Materials
LRF 3050.AR and NMV P221202 were inserted into a recently published ceratosaurian phylogenetic matrix 39 (see Supplementary Dataset S1 ) and analysed with equal weights parsimony in TNT 1.5 93 . A driven search strategy was implemented to calculate optimal trees, with each search using 100 replicates of random sectorial searches, each with 30 rounds of drifting, 5 rounds of tree fusing and 50 ratcheting cycles. The analysis was halted after two such successive searches returned shortest trees of the same length.

Discussion
Comparisons of LRF 3050.AR
Opisthocoelous vertebral centra characterise the cervical series of many neotheropods. The posterior surfaces are typically moderately to strongly concave and the anterior surface may be generally flattened 49 , 50 or slightly convex as in ceratosaurians 51 – 54 and basal tetanurans 55 , or form a well-defined projection as in abelisaurids 30 , 56 , megalosauroids 57 – 59 , allosauroids 49 , 60 , 61 , megaraptorids 9 , 62 and alvarezsaurids 63 – 65 . In addition, opisthocoely continues into the anterior dorsal series in megalosauroids 57 , allosauroids 66 , 61 , megaraptorids 62 , and alvarezsaurids 63 . This differs from the condition in Dilophosaurus wetherilli and abelisauroids in which the anterior cervical centra are typically weakly opisthocoelous and transition along the series to amphicoelous in the most posterior cervicals and anterior dorsals 50 – 52 , 54 , 67 , 68 . All preserved mid-posterior cervical centra of Elaphrosaurus are amphicoelous 41 . Following these observations, the amphicoelous centrum and reduced inclination of the articular surfaces of LRF 3050.AR indicate a placement in the middle or posterior region of the neck. The distortion of the centrum, in particular the exaggerated offset of the articular surfaces resulting from taphonomic compression, precludes a more accurate placement of the centrum.
Among ceratosaurs, the dimensions of LRF 3050.AR are most similar to the anterior cervical series of the abelisaurid Viavenator exxoni . However, as noted above, the anterior cervical series in Viavenator and other abelisaurids consists of opisthocoelous centra, contrary to the amphicoelous condition in LRF 3050.AR. Unfortunately, direct comparisons of the centrum proportions of LRF 3050.AR are complicated by the strong taphonomic dorsoventral compression of the specimen. However, when the anterior half of the cervical series is excluded, the dimensions of LRF 3050.AR are more similar to the moderately elongate proportions of noasaurids 41 , 68 , 69 than the more robust and anteroposteriorly shortened centra in abelisaurids 51 , 52 or the strongly elongate centra in Elaphrosaurus 41 . The anterior and posterior articular surfaces are considerably wider mediolaterally than dorsoventrally tall (Table 1 ). This is similar to the proportions throughout the cervical series of Masiakasaurus knopfleri and Elaphrosaurus 41 , 67 , 68 , but may have been exaggerated by taphonomic distortion.
The preserved floor of the neural canal on the dorsal surface of LRF 3050.AR indicates that the canal was relatively wide mediolaterally relative to the width of the centrum and was likely wider than the thickness of the walls of the laterally bounding neural arch pedicels (Fig. 2 ). The neural canals in the cervicals of basal neotheropods and most ceratosaurs are narrower with respect to both the centrum and the neural arch pedicels 48 , 50 , 52 , 68 . In contrast, the neural canals of Elaphrosaurus 41 and noasaurids 53 , 68 , 69 are considerably wider relative to the centrum and wider than the thickness of the walls of the neural arch pedicels, as seen in LRF 3050.AR.
The distinct posterior centrodiapophyseal lamina (pcdl) of LRF 3050.AR is remarkably similar to those of noasaurids (Fig. 3 ). In MACN-PV (Museo Argentino de Ciencias Naturales “Bernardino Rivadavia”, Buenos Aires, Argentina) 622, a cervical vertebra initially described as an oviraptorosaur 70 , 71 but which most likely pertains to Noasaurus 53 , the pcdl narrows abruptly from the anteriorly placed diapophyses and contacts the centrum at approximately the anteroposterior midpoint (Fig. 3 ). A similar pcdl also appears to have been present in GSI (Geological Survey of India, Kolkata, India) K20/614, a cervical vertebra ascribed to the Indian noasaurid Laevisuchus indicus 72 . The plesiomorphic condition of a posteriorly contacting pcdl is present in the middle cervicals of Dilophosaurus 50 , abelisauroids 30 , 34 , 54 and also the recently described Brazilian noasaurid Vespersaurus paranaensis 69 . Despite the loss of the posterior portion of the posterior centrodiapophyseal lamina, a medial attachment of the pcdl is most likely to have been present in LRF 3050.AR. A medially positioned pcdl also characterises the middle to posterior cervical series of other ceratosaurs, including Elaphrosaurus 41 , Majungasaurus crenatissimus 52 and Carnotaurus sastrei 51 .
Perhaps the most distinguishing feature of LRF 3050.AR is the mediolaterally concave ventral surface of the centrum delimited by pronounced ventrolateral ridges. In most ceratosaurs, the ventral surface of the cervical centra is flattened or slightly convex, forming a distinct edge at the contact with the lateral surfaces 51 , 68 , 73 . Ventrolateral ridges on cervical centra such as those present in LRF 3050.AR have been reported only in the basal ceratosaurian Elaphrosaurus and the noasaurid Noasaurus 41 , 53 . In Elaphrosaurus , the sharp lateroventrally directed ridges are present only at the posterior part of the centrum 41 , which differs from the condition in LRF 3050.AR in which they are continuous with the parapophysis and extend along almost the entire length of the centrum. Similar ventrolateral ridges have also been reported in MACN-PV 622 53 . Ventrolateral ridges have been described in therizinosaurs and unenlagiine dromaeosaurids 74 – 76 ; however, they are developed only as comparatively weaker and rounded ridges that do not form the sharp edges that are seen in ceratosaurians. In addition, in unenlagiines the ventrolateral ridges transition into well-developed carotid processes at the anterior end of the centra 76 , 77 . This contrasts with the condition in LRF 3050.AR in which carotid processes are absent and the ridges remain sharply defined and contact the parapophyses at the anteroventral margins of the anterior articular surface.
Status of NMV P221202
A ceratosaurian astragalocalcaneum (NMV P221202) was discovered in the upper Barremian–lower Aptian San Remo Member of the upper Strzelecki Group in Victoria 15 (Fig. 4 ). NMV P221202 was compared to the only Australian theropod astragali known at the time, namely those of the megaraptorid Australovenator wintonensis 1 and the Australian pygmy ‘ Allosaurus ’ 78 , now considered to also pertain to Megaraptoridae 2 , 8 . The Victorian astragalocalcaneum, NMV P221202, was found to differ from the two Australian megaraptorid astragali, most notably in the co-ossification of the astragalus and calcaneum, the absence of a horizontal vascular groove on the anterior surface of the astragalar body, and the lack of a crescentic groove on the posterior surface of the ascending process 15 . NMV P221202 was referred to Ceratosauria in a phylogenetic analysis, but possible ingroup relationships were not considered with confidence despite similarities with the astragalus of the Madagascan noasaurid Masiakasaurus 15 .
Subsequently, the assignment of NMV P221202 to Ceratosauria was questioned 8 on the basis of five observations: the presence of a distinct eminence on the medial surface of the ascending process and paired oval fossae at the base of the ascending process of the astragalus anteriorly (Fig. 4a ), both of which are present in alvarezsaurids 79 ; a vertical groove on the posterior surface of the ascending process and a lateral constriction of the tibial facet caused by a thickening of the ascending process laterally (Fig. 4c ), both of which are present in megaraptorids; and a prominent posterodorsal notch on the calcaneum for articulation of the tibia (Fig. 4b ), which they considered to be a tetanuran synapomorphy based on the results of a phylogenetic analysis of tetanurans 80 . Based on these observations, it was concluded that NMV P221202 could only be considered an indeterminate averostran 8 . The debate surrounding the affinities of NMV P221202 was commented on briefly in a review of the Victorian Cretaceous polar biota 81 , with no preference stated for either of the two hypotheses.
However, a detailed consideration of these arguments as presented raises a number of problems. Firstly, as previously noted 8 , the ascending process of the astragalus in alvarezsaurids differs markedly from the condition present in NMV P221202. As is typical for coelurosaurs, the base of the ascending process in alvarezsaurids occupies almost the entire width of the astragalus 63 , 79 . Furthermore, in alvarezsaurids with the exception of Patagonykus puertai , the medial surface of the ascending process is excavated by a deep notch, leaving only a low medial portion of the ascending process and a taller narrow lateral portion 63 , 65 , 82 – 84 . However, in NMV P221202 the ascending process is parallel-sided at the base, was likely subrectangular in its original form, and its base spans only the lateral two-thirds of the astragalus. In addition, contrary to previous remarks 8 , no medial eminence of the ascending process that resembles that of NMV P221202 is present in either Patagonykus or Mononykus olecranus . In the former taxon, the medial edge of the ascending process is smoothly sinusoidal in anterior view with no noticeable eminences 79 , whereas the medial edges of the medially-notched ascending processes of Mononykus and other alvarezsaurids are straight or slightly concave, with no noticeable eminences 63 , 65 . Secondly, as noted in the original description of NMV P221202 15 , and contrary to previous observations 8 , there is no groove on the posterior surface of the ascending process similar to those that have been reported in megaraptorids. The lateral edge of the posterior surface of the base of the ascending process in NMV P221202 is slightly elevated with respect to the area immediately lateral to an abraded area of periosteum that may have given the appearance of a grooved surface. 
However, this is markedly different from the well-defined crescentic groove present on the posterior surface of the ascending process in megaraptorid astragali 1 , 14 , 78 . Thirdly, the lateral side of the tibial facet of the astragalus in the abelisaurid Majungasaurus is also constricted relative to the medial side 85 , indicating that this feature is not restricted to megaraptorids as previously asserted 8 and that abelisauroid affinities cannot be dismissed. Finally, tibial facets on the calcaneum have been observed in Dilophosaurus , Majungasaurus , Elaphrosaurus , Ceratosaurus and Masiakasaurus 41 , 67 , 85 , 86 , indicating that this feature is diagnostic of Averostra, a more inclusive group than stated previously 8 .
Phylogenetic analysis
The phylogenetic analysis including LRF 3050.AR and NMV P221202 (see Methods and Materials for details) returned 217 most parsimonious trees of 4293 steps (CI: 0.306, RI: 0.512). The strict consensus tree resolves both Australian specimens within Noasauridae (Fig. 5 ). The synapomorphies diagnosing Noasauridae include a spur on the medial surface of the ascending process of the astragalus (858:1), mediolaterally thin cervical epipophyses (1272:1), cervical postzygapophyses swept back posteriorly and surpassing the posterior end of the vertebral centra (1083:1), smooth medial surfaces of the anteromedial process of the maxilla (915:0), anteroposteriorly shortened palatal shelves of the maxilla (1310:1), paradental plates of the maxilla low and partially obscured by lamina of maxilla (972:1) and shaft of metatarsal II mediolaterally compressed (1208:1). The presence of ventrolateral ridges contacting the parapophyses on the cervical vertebrae (210:1) may represent an additional synapomorphy of Noasauridae. However, the distribution of this character is presently uncertain and so far has only been reported in MACN-PV 622 (cf. Noasaurus ), in addition to LRF 3050.AR. The noasaurid with the most complete cervical series, Masiakasaurus , has flattened ventral surfaces of the centra with no ventrolateral ridges 41 . When Masiakasaurus is coded as such for the aforementioned character, the presence of ventrolateral ridges does not optimise as a synapomorphy of Noasauridae. However, this may be an artifact of the long-standing lack of resolution among noasaurids due to their poor fossil record, and it remains plausible that ventrolateral ridges may represent a synapomorphy of a subclade within Noasauridae. However, more data is needed to thoroughly test this hypothesis.
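The ensemble consistency index (CI) and retention index (RI) reported for the most parsimonious trees are simple functions of per-character step counts summed over the matrix. A minimal sketch of how they are computed, using a handful of made-up step counts purely for illustration (not the actual character data of this analysis):

```python
# Ensemble CI and RI for a parsimony tree, from per-character step counts.
# The three lists are illustrative placeholders, NOT the real matrix.
min_steps = [1, 2, 1, 3]  # m_i: minimum conceivable steps per character
obs_steps = [2, 3, 3, 4]  # s_i: steps observed on the tree
max_steps = [4, 5, 3, 6]  # g_i: maximum steps (character totally unresolved)

M, S, G = sum(min_steps), sum(obs_steps), sum(max_steps)
CI = M / S               # 1.0 = no homoplasy; lower = more homoplasy
RI = (G - S) / (G - M)   # fraction of potential synapomorphy retained

print(round(CI, 3), round(RI, 3))  # -> 0.583 0.545
```

Lower values, as reported above (CI: 0.306, RI: 0.512), indicate substantial homoplasy across the matrix, which is typical for large theropod datasets.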
The presence of a medial eminence on the ascending process is a synapomorphy that pertains directly to NMV P221202. Among theropods, this feature is shared only with Masiakasaurus 68 and represents the strongest evidence in favour of noasaurid affinities for NMV P221202. Unfortunately, the lack of preserved ascending processes in the astragali of other noasaurid taxa precludes detailed comparisons.
If the results presented here are correct, then NMV P221202 and LRF 3050.AR represent novel reports of noasaurids from the late Barremian–early Aptian of Victoria and the Cenomanian of New South Wales respectively. Under the taxonomic framework presented here, Noasauridae consists of at least six named taxa: Laevisuchus , Noasaurus and Masiakasaurus from the Maastrichtian of India, Argentina, and Madagascar respectively 44 , 67 , 87 ; Velocisaurus , from the Santonian of Argentina 88 ; Vespersaurus from the Aptian–Campanian of Brazil 69 ; and Afromimus tenerensis from the Aptian–Albian of Niger, initially described as an ornithomimid 89 but recently reappraised as a probable noasaurid 90 . Genusaurus sisteronis , from the Albian of France, has previously been considered as a noasaurid 22 , but subsequent analyses, including the one presented here, have preferred a position within Abelisauridae. Ligabueino andesi , from the Barremian–early Aptian of Argentina 91 , was also originally described as a noasaurid, but phylogenetic studies failed to identify any noasaurid synapomorphies in this taxon 22 , 68 . NMV P221202, which is identified by phylogenetic analysis as a noasaurid, therefore represents the oldest known representative of the clade in the world to date (Fig. 5 ). However, if the broader taxonomic scope of Noasauridae (i.e., inclusive of elaphrosaurines; see Taxonomic Framework) is favoured instead, then NMV P221202 would represent the oldest known noasaurine, with the oldest noasaurids represented by the Middle–Late Jurassic aged elaphrosaurines 33 , 41 , 92 . Regardless of their phylogenetic position, the newly described Australian noasaurids expand the known palaeogeographic range of the clade outside of South America, Madagascar and India.
Presently, the poor fossil record of Noasauridae, and the corresponding lack of resolution among the known noasaurid taxa, preclude the formation of any novel palaeobiogeographic hypotheses incorporating the newly discovered Australian record of noasaurid theropods. Future discoveries may reveal more detail about the evolution and palaeobiogeographic distribution of this enigmatic clade.

Abstract

The diversity of Australia’s theropod fauna from the ‘mid’-Cretaceous (Albian–Cenomanian) is distinctly biased towards the medium-sized megaraptorids, despite the preponderance of abelisauroids in the younger but latitudinally equivalent Patagonian theropod fauna. Here, we present new evidence for the presence of ceratosaurian, and specifically abelisauroid, theropods from the Cenomanian Griman Creek Formation of Lightning Ridge, New South Wales. A partial cervical vertebra is described that bears a mediolaterally concave ventral surface of the centrum delimited by sharp ventrolateral ridges that contact the parapophyses. Among theropods, this feature has been reported only in a cervical vertebra attributed to the noasaurid Noasaurus . We also reappraise evidence recently cited against the ceratosaurian interpretation of a recently described astragalocalcaneum from the upper Barremian–lower Aptian San Remo Member of the upper Strzelecki Group in Victoria. Inclusion of the Lightning Ridge cervical vertebra and the Victorian astragalocalcaneum in a revised phylogenetic analysis focused on elucidating ceratosaurian affinities reveals support for placement of both specimens within Noasauridae, which among other characters is diagnosed by the presence of a medial eminence on the ascending process of the astragalus. The Lightning Ridge and Victorian specimens simultaneously represent the first noasaurids reported from Australia, and the astragalocalcaneum is considered the earliest known example of a noasaurid in the world to date.
The recognition of Australian noasaurids further indicates a more widespread Gondwanan distribution of the clade outside of South America, Madagascar and India consistent with the timing of the fragmentation of the supercontinent.
Taxonomic Framework
There are presently two hypotheses regarding the content of Noasauridae and the phylogeny of non-abelisaurid, non-ceratosaurid ceratosaurians. Abelisauroidea was originally considered to include Abelisauridae and Noasauridae, and all ceratosaurs more closely related to them than to Ceratosaurus nasicornis 27 . The earliest phylogenetic analysis of ceratosaurs identified a monophyletic Abelisauroidea following this definition 28 , which was subsequently expanded to include the African Elaphrosaurus bambergi 29 . Subsequent phylogenetic studies expanded the taxonomic scope of Noasauridae to include small-bodied Late Cretaceous taxa from South America 21 , 30 – 32 and the Jurassic and Cretaceous of Africa 33 , to the exclusion of Elaphrosaurus . This topology has been widely recovered in more recent analyses 21 , 34 – 39 . However, Elaphrosaurus has also been resolved within Noasauridae in other analyses 40 , most notably in the analysis accompanying the recent redescription of the holotype 41 . Under this hypothesis, the subclade Noasaurinae was coined to include ceratosaurs more closely related to Noasaurus leali than to Elaphrosaurus , Ceratosaurus and Allosaurus fragilis , and Elaphrosaurinae was erected to include ceratosaurs more closely related to Elaphrosaurus than to Noasaurus , Abelisaurus comahuensis , Ceratosaurus and Allosaurus 41 . The results of a revised phylogenetic analysis for Limusaurus inextricabilis 42 were used to support a recently proposed phylogenetic framework for Ceratosauria 43 in which Noasaurinae and Elaphrosaurinae were recovered as subclades of Noasauridae. In line with the topology of our phylogenetic tree (see Phylogenetic Analysis), the following descriptions and discussions consider Noasauridae to have the same taxonomic content as Noasaurinae 41 , with members of Elaphrosaurinae representing ceratosaurs basal to Abelisauroidea (i.e., Noasauridae + Abelisauridae).
Systematic Palaeontology
Theropoda Marsh 1881
Neotheropoda Bakker 1986
Averostra Paul 2002
Ceratosauria Marsh 1884
Noasauridae indet. Bonaparte and Powell 1980 44
LRF 3050.AR
Locality
LRF (Australian Opal Centre, Lightning Ridge, New South Wales, Australia) 3050.AR was collected from an underground opal mine at the ‘Sheepyard’ opal field, approximately 40 km southwest of Lightning Ridge in central northern New South Wales (Fig. 1 ). The specimen derives from the Wallangulla Sandstone Member 45 of the Griman Creek Formation. Radiometric dates for the Wallangulla Sandstone Member at Lightning Ridge indicate a maximum depositional age of 100.2–96.6 Ma 46 . LRF 3050.AR was found within a monodominant bonebed of the iguanodontian Fostoria dhimbangunmal 47 . Other faunal components from this accumulation include isolated unionid bivalves (LRF 3051), a testudine caudal vertebra (LRF 3053), a small ornithopod caudal centrum (LRF 3052), and a possible indeterminate theropod ulna (LRF 3054). A complete discussion of the geological setting, sedimentology, age and faunal diversity of the Griman Creek Formation is presented elsewhere 46 .
Description
LRF 3050.AR has been taphonomically altered by erosion, breakage and preparation. The centrum is markedly flattened dorsoventrally through taphonomic compaction, such that much of the left lateral surface is visible in ventral view. In addition, the dorsal portion of the centrum has been sheared off obliquely. Notwithstanding the dorsoventral compression, the centrum is hourglass-shaped in dorsal and ventral views; the narrowest point occurs approximately one-third of the length from the anterior articular surface (Fig. 2a,b ). In lateral view, the anterior and posterior articular surfaces are oriented obliquely relative to the long axis of the centrum (approximately 20 degrees from vertical; Fig. 2c,d ); however, this appearance is probably a result of the taphonomic compaction and not indicative of their original orientations. The ventral surface of the centrum is markedly concave in lateral view (Fig. 2c,d ). The centrum is slightly more than twice as long anteroposteriorly as the width of the posterior articular surface (Table 1 ). The centrum is amphicoelous. The central region of the anterior articular surface is flattened and surrounded laterally and ventrally by a concave rim (Fig. 2e ), whereas the centre of the posterior articular surface is concave and bordered ventrally by a convex rim (Fig. 2f ). The preserved portion of the anterior articular surface is elliptical in anterior view, wider mediolaterally than dorsoventrally tall (Fig. 2e ). Only the ventralmost portion of the left parapophysis is present, on the ventrolateral edge of the centrum anteriorly, and it projects ventrolaterally (Fig. 2a,c,e ). A region of exposed trabecular bone immediately dorsal to the preserved parapophysis indicates the likely size of its attachment to the centrum (Fig. 2c ).
An anteroposteriorly oriented lamina is present anterodorsally, extending from the anterior articular surface to approximately one third of the length of the centrum and overhanging the right lateral surface (Fig. 2a,b ). The posterior edge of the lamina is broken, indicating that it likely continued further posteriorly. On the ventromedial surface of this lamina are the eroded remains of a smaller, vertically oriented lamina (Fig. 2d ). The position of this smaller lamina would have been dorsal to the parapophysis, and its vertical and lateral continuation indicates that it would have contacted the diapophysis ventrally. Therefore, this lamina is interpreted as a paradiapophyseal lamina (ppdl; following the nomenclature for vertebral laminae of Wilson 48 ). Consequently, the portion of the larger lamina anterior to the ppdl is interpreted as the anterior centrodiapophyseal lamina (acdl), and the posterior portion is interpreted as the remains of the posterior centrodiapophyseal lamina (pcdl). The posterior articular surface is missing its dorsal portion due to erosion, similar to the anterior end, and is elliptical, having a greater mediolateral width than dorsoventral height (Fig. 2f ). A portion of the floor of the neural canal is preserved across the anterior half of the dorsal surface of the centrum (Fig. 2a ). Despite erosion to the dorsal surface of the centrum, the neural canal appears to have been mediolaterally wide, approximately half the width of the centrum itself, and considerably wider than the neural arch pedicels as visible from their eroded bases (Fig. 2b ). Two small (~3 mm long) lenticular foramina are present on the posterior half of the centrum (Fig. 2a ). Whether these foramina are pneumatic in origin cannot be determined. The ventral surface of the centrum is concave mediolaterally and delimited by well-defined, subparallel ventrolateral ridges that extend as laminae from the parapophyses along nearly the entire length of the centrum, becoming less distinct posteriorly (Fig. 2b ).
Supplementary information
Supplementary information is available for this paper at 10.1038/s41598-020-57667-7.
Acknowledgements
We are indebted to Robert Foster who discovered the ‘Sheepyard’ specimens and Joanne Foster and Gregory Robert Foster who generously donated the specimens under the Australian Government’s Cultural Gifts program. We thank Jenni Brammall, Manager of the Australian Opal Centre, for allowing access to LRF 3050.AR and providing resources to facilitate their study while in Lightning Ridge, and Tim Ziegler of Museum Victoria for making NMV P221202 available for study. TNT is made freely available thanks to a subsidy from the Willi Hennig Society. We thank Stephen Poropat and two anonymous reviewers for their valuable comments that improved the quality of the manuscript. We acknowledge the Yuwaalaraay, Yuwaalayaay and Gamilaraay custodians of country in the Lightning Ridge district, and pay our respects to Elders past and present. This work was supported by an Australian Research Council Discovery Early Career Researcher Award (project ID: DE170101325) to P.R.B.
Author contributions
S.A.B. designed the research, performed the descriptive and comparative studies, analysed data, prepared figures and performed the phylogenetic analysis; E.S. and P.B. contributed specimen photographs and data; S.A.B., E.S. and P.B. wrote the paper.
Data availability
All data generated or analysed during this study are included in this published article (and its Supplementary Information).
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2020 Jan 29; 10:1428.
PMC7168315 | PMID 32059467

1. Introduction
Biofilms—aggregations of densely packed microbial cells embedded inside an exopolysaccharide (EPS) matrix—are a major challenge in public health management. The EPS matrix provides a protective barrier to the biofilm, making it recalcitrant to antimicrobial agents and host defenses [ 1 ]. Staphylococcus aureus is thought to be one of the major causes of nosocomial infections globally. While its planktonic counterpart is limited to bacteremia and skin abscesses, more chronic infections such as cystic fibrosis, osteomyelitis, and endocarditis are associated with its biofilm mode of growth [ 2 , 3 ]. To combat this problem, many strategies have been proposed, including (i) the inhibition of primary bacterial adhesion or attachment to living or non-living surfaces; (ii) the disruption of biofilm architecture during maturation processes; and (iii) the inhibition of cell-to-cell communication—i.e., quorum sensing [ 4 , 5 ]. Chrysin (5,7-dihydroxyflavone), a flavone constituent of Oroxylum indicum Vent., has already been documented for its anticancer, antioxidant and antibacterial properties [ 6 , 7 ]. In spite of its many biological activities, its low water solubility, poor absorption in the intestinal lumen, low bioavailability and rapid metabolism in the body limit its therapeutic applications [ 8 ]. In this regard, a reduction in particle size may serve as a potential means for enhancing the solubility and dissolution of chrysin [ 9 ].
Nanocarriers have the ability to inhibit bacterial growth and biofilm formation, and are increasingly being used as an attractive tool to combat chronic infections [ 10 ]. They aid in increasing the efficacy of the drug by acting as a protective barrier against enzymatic hydrolysis, increase the biosorption efficacy of the drug in the intestinal lumen, increase solubility and also provide sustained release [ 11 , 12 ]. In recent years, chitosan has been widely used as a nano-carrier due to its non-toxicity, biocompatibility, immunostimulating and mucoadhesive properties [ 13 ]. Chitosan is a cationic heteropolysaccharide composed of β-(1,4)-linked repeating units of glucosamine (GlcN) and N-acetylglucosamine (GlcNAc), extracted by the partial alkaline N-deacetylation of chitin found in the exoskeleton of crustaceans [ 14 , 15 ]. The antimicrobial and anti-biofilm potential of chitosan and its nano-derivatives has been reported against various microorganisms such as Listeria monocytogenes , Bacillus cereus , Enterococcus faecalis , etc. [ 1 , 16 ]. In the present study, chrysin-encapsulated chitosan nanoparticles (CCNPs) were synthesized using the ionic gelation method, characterized, and evaluated for their anti-biofilm activity against S. aureus .
2. Materials and Methods
2.1. Materials
Chitosan (75–85% deacetylated), sodium tripolyphosphate (TPP) and chrysin were procured from Sigma-Aldrich. The test strain, S. aureus (MCC 2408), was purchased from the Microbial Culture Collection (MCC), Pune, India.
2.2. Synthesis of Chrysin-Loaded Chitosan Nanoparticles
Medium molecular weight chitosan (0.2%, w / v ) was mixed with an aqueous solution of acetic acid (0.1%, v / v ) and incubated overnight at 60 °C with continuous agitation. A stock of 5 mg/mL of chrysin dissolved in DMSO was used to prepare the nanoparticle formulation. An aliquot of the chrysin stock was added to the chitosan solution (pH 4.8). Subsequently, 40 mL of TPP solution (0.2%, w / v ) was dispensed dropwise into the chitosan–chrysin solution and kept under continuous agitation at 1000× g for 30 min. The chitosan:TPP ratio was maintained at 5:1 [ 13 ] with a final chrysin concentration of 50 μg/mL. The nanoparticles formed were concentrated by centrifuging the suspension for 20 min at 12,000× g , washed with MilliQ water to remove the unbound chrysin and dried at room temperature for further studies [ 17 ].
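As a back-of-the-envelope check of the quantities implied above (a sketch only; the helper names, and the assumption that the 5:1 chitosan:TPP ratio is on a mass basis with both solutions at 0.2% w/v, are ours rather than stated in the protocol):

```python
def solution_volume_for_ratio(tpp_volume_ml, tpp_conc_pct, chitosan_conc_pct, ratio):
    """Volume (mL) of chitosan solution giving the target chitosan:TPP mass ratio.

    Concentrations are % w/v, i.e. g per 100 mL (0.2% w/v -> 2 mg/mL).
    """
    tpp_mass_mg = tpp_volume_ml * tpp_conc_pct * 10
    chitosan_mass_mg = tpp_mass_mg * ratio
    return chitosan_mass_mg / (chitosan_conc_pct * 10)


def chrysin_stock_volume(total_volume_ml, target_ug_per_ml, stock_mg_per_ml=5):
    """Volume (mL) of the 5 mg/mL chrysin/DMSO stock for a target final concentration."""
    return total_volume_ml * target_ug_per_ml / (stock_mg_per_ml * 1000)
```

Under these assumptions, 40 mL of 0.2% w/v TPP at a 5:1 mass ratio implies 200 mL of 0.2% w/v chitosan solution, and about 2.4 mL of stock brings 240 mL of mixture to 50 μg/mL chrysin.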
2.3. Physical Characterization of Nanoparticles
The nanoparticles (NPs) were subjected to dynamic light scattering (DLS) to determine the mean hydrodynamic diameter (MHD) and polydispersity index (PDI). The FTIR spectrum was recorded in the range of 4000–500 cm −1 . A transmission electron microscope (TEM) was used to determine the morphology and size of the CCNPs [ 18 ].
2.4. Determination of the Loading Efficiency and Drug Release of Chrysin-Loaded Chitosan NPs
The amount of chrysin loaded in the nanoparticles was determined using a UV-Vis spectrophotometer. After the collection of NPs from the reaction mixture, the absorbance of the supernatant was recorded at 348 nm and the concentration of unbound chrysin was estimated from the standard curve of chrysin [ 13 , 19 ]. For the release study, the CCNPs were dispersed in a medium consisting of PBS and DMSO (co-solvent, 1%, v / v ) and incubated with gentle agitation (100× g ) at 37 °C. Two milliliters of the sample was retrieved at regular intervals, centrifuged at 10,000× g , and the absorbance of the supernatant was recorded at 348 nm. The cumulative chrysin released into the medium was determined at every 2 h interval with reference to the standard curve of chrysin [ 20 ]. Chrysin release (%) = (chrysin released in the supernatant/loaded chrysin concentration) × 100.
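The indirect loading calculation can be sketched as follows (a minimal illustration; the function names, the linear calibration form, and the example numbers are ours, not from the study):

```python
def free_drug_conc(absorbance, slope, intercept=0.0):
    """Unbound chrysin concentration from supernatant absorbance at 348 nm,
    assuming a linear calibration curve A = slope * C + intercept."""
    return (absorbance - intercept) / slope


def loading_efficiency(total_ug, free_ug):
    """Percent of the added chrysin entrapped in the nanoparticles."""
    return (total_ug - free_ug) / total_ug * 100
```

For example, with a calibration slope of 0.02 mL/μg, an absorbance of 0.4 corresponds to 20 μg/mL of unbound drug; 191.4 μg free out of 1000 μg added corresponds to a loading efficiency of 80.86%.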
2.5. Determination of the Sub-Minimum Inhibitory Concentration (Sub-MIC) of Chrysin-Loaded Chitosan NPs
The minimum inhibitory concentration (MIC) of CCNPs was determined using the macro-broth dilution assay (Clinical and Laboratory Standards Institute Guidelines, 2006). Two-fold dilutions of the NPs were prepared in Mueller-Hinton broth to achieve final concentrations ranging from 8 μg/mL to 1024 μg/mL. An overnight culture of S. aureus (100 μL) was added to each NP suspension and incubated at 37 °C for 24 h. The test tubes were observed for visible signs of growth and the spectrophotometric readings were recorded at 600 nm [ 12 ].
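The two-fold dilution scheme can be generated programmatically (a simple sketch; the function name is ours):

```python
def twofold_series(low, high):
    """Two-fold concentration steps from `low` up to `high`, inclusive."""
    series = []
    c = low
    while c <= high:
        series.append(c)
        c *= 2
    return series
```

Here `twofold_series(8, 1024)` yields the eight test concentrations 8, 16, 32, 64, 128, 256, 512 and 1024 μg/mL.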
2.6. In Vitro Anti-Biofilm Assays of Chrysin-Loaded Chitosan NPs
The effect of the sub-MIC of CCNPs on biofilm formation by S. aureus was evaluated relative to chitosan NPs (CNPs) and chrysin.
2.6.1. Microtiter Plate (MTP) Assay for Biofilm Disruption and Inhibition
MTP assay for biofilm disruption and inhibition was performed according to Mu et al. [ 1 ]. For the disruption assay, an overnight culture of S. aureus (100 μL) was transferred into the wells of 96-well flat-bottomed polystyrene plates. After incubation at 37 °C for 24 h, the wells were washed with 100 μL of 0.9% ( w / v ) NaCl to remove the unadhered cells. The biofilm formed was further incubated for another 24 h after adding 90 μL tryptone soy broth (TSB) supplemented with the sub-MIC concentration of CCNPs. The biofilm attached at the bottom of each well was fixed with 100 μL of absolute methanol for 15 min and subsequently treated with 100 μL of crystal violet (0.2% w / v ). Control samples were maintained with S. aureus culture alone, and cultures of S. aureus treated with DMSO served as a vehicle control. The dye attached to the biofilm was solubilized in 150 μL of glacial acetic acid and the optical density was recorded at 595 nm. For the inhibition assay, S. aureus culture (90 μL) grown in TSB was seeded into individual wells of microtiter plates in the presence of the sub-MIC of CCNPs and incubated at 37 °C for 24 h. The planktonic cells were discarded and the MTP was stained with crystal violet (0.2% w / v ). The inhibition of biofilm formation was determined by solubilizing the CV attached to the biofilm and measuring the optical density at 595 nm. Biofilm inhibition/disruption was quantified using the following formula:
% Biofilm inhibition/disruption = ([OD 595 of control − OD 595 of test]/OD 595 of control) × 100
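Expressed in code, the crystal-violet readings convert to a percentage reduction by dividing the control-minus-test difference by the control OD (a one-line sketch; the function name is ours):

```python
def percent_reduction(od_control, od_test):
    """% biofilm inhibition or disruption from crystal-violet OD595 readings."""
    return (od_control - od_test) / od_control * 100
```

For instance, a treated-well OD595 of 1.0 against a control of 2.0 corresponds to 50% reduction.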
2.6.2. Microscopic Examination of Biofilm
Reduction in the biofilms of S. aureus was observed using confocal laser scanning microscopy (CLSM). S. aureus biofilms were allowed to grow on glass coverslips (18 × 18 mm) placed in 12-well polystyrene plates containing TSB supplemented with CCNPs (sub-MIC) and incubated overnight at 37 °C. The glass coverslip was washed with sterile distilled water, stained and processed accordingly [ 21 ]. For light microscopy, the biofilm formed was fixed using methanol and treated with crystal violet (0.2% w / v ); the coverslip was subsequently washed, air dried and observed using a light microscope (40×). For CLSM, the biofilm formed on the coverslip was washed with 0.01 M phosphate buffered saline (PBS), stained using acridine orange (0.2% w / v ) for 1 min and observed using a confocal laser scanning microscope at 20×. The 3D image was recorded and Z-stacks were prepared to determine the effect of the CCNPs on the thickness of the biofilm [ 22 ].
2.6.3. Exopolysaccharide (EPS) Quantification and Microbial Adhesion to Hydrocarbon (MATH) Assay
Production of EPS by S. aureus was quantified in the presence and absence of CCNPs by the total carbohydrate quantification method. Cells of S. aureus grown in the presence and absence of CCNPs were harvested by centrifugation (10,000× g for 2 min). The cell pellet was washed and suspended in 200 μL of sterile PBS, to which an equal volume of 5% ( v / v ) phenol and a 5× volume of concentrated sulfuric acid containing 0.2% ( w / v ) hydrazine sulphate were added. The tubes were incubated in the dark for 1 h followed by centrifugation at 10,000× g for 10 min. The supernatant was aspirated and the optical density was measured at 490 nm [ 21 ]. A reduction in EPS production was quantified using the following formula: % EPS reduction = ([OD 490 of control − OD 490 of test]/OD 490 of control) × 100.
The effect of CCNPs on the cell surface hydrophobicity of S. aureus was evaluated using the MATH assay. The optical densities of the CCNP-treated and untreated cell suspensions were recorded at 600 nm after 24 h of incubation. The bacterial suspension was mixed with toluene (1 mL) and vortexed for 2 min. The optical density of the aqueous phase was measured at 600 nm. In both assays, control samples were maintained with S. aureus culture only, and cultures of S. aureus treated with DMSO served as a negative control [ 21 ]. The percentage hydrophobicity was calculated as ([OD 600 before mixing − OD 600 of aqueous phase]/OD 600 before mixing) × 100, and the inhibition of hydrophobicity was expressed as the percentage decrease relative to the untreated control.
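A minimal sketch of these calculations, assuming the standard MATH-assay formulation (the function names are ours):

```python
def hydrophobicity(od_initial, od_aqueous):
    """% cell-surface hydrophobicity: fraction of cells partitioned into toluene,
    from the OD600 before mixing and of the aqueous phase after mixing."""
    return (od_initial - od_aqueous) / od_initial * 100


def hydrophobicity_inhibition(h_control, h_treated):
    """% decrease in hydrophobicity of treated cells relative to the control."""
    return (h_control - h_treated) / h_control * 100
```

For example, an aqueous-phase OD600 of 0.4 from an initial 1.0 gives 60% hydrophobicity, and a drop from 50% (control) to 25% (treated) is a 50% inhibition.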
2.6.4. Growth Curve Analysis
An overnight culture of S. aureus was diluted with LB medium until the optical density of the cell suspension reached 0.05 at 600 nm. The suspension was then supplemented with chrysin and NPs separately and incubated overnight at 37 °C at 100× g . The cell suspension (1 mL) was withdrawn and the optical density was measured at 600 nm at every 2 h interval [ 23 ].
2.7. Statistical Analysis
All the assays were repeated thrice, and the data are presented as mean ± standard error. Significance among treatments was investigated using one-way ANOVA at a statistical significance level of p ≤ 0.05. Significant differences among the chrysin, chitosan and CCNP treatments are indicated with an asterisk; non-significant groups are denoted by NS.
3. Results
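For illustration, the F statistic underlying one-way ANOVA can be computed directly in a few lines (a pure-Python sketch; in practice a statistics package would also supply the p-value):

```python
def f_oneway(*groups):
    """One-way ANOVA F statistic: between-group mean square / within-group mean square."""
    k = len(groups)                                  # number of treatment groups
    n = sum(len(g) for g in groups)                  # total number of observations
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

For the toy data [1, 2, 3] vs. [2, 3, 4], this yields F = 1.5.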
3.1. Synthesis and Characterization of Chrysin-Loaded Chitosan NPs
The CCNPs were synthesized using the ionotropic gelation method, with TPP molecules as a linker. The ratio of chitosan to TPP is one of the factors that influence the aggregation of nanoparticles; a chitosan/TPP ratio of 5:1 was found to be the best formulation for the synthesis of CCNPs. The mean hydrodynamic diameters of the synthesized CNPs and CCNPs were found to be ~299 nm and ~355 nm, respectively, and the particles showed intermediate polydispersity indices of 0.434 and 0.487, respectively ( Figure 1 a,b). The CCNPs were found to be spherical, with sizes ranging from 130–341 nm as indicated by the TEM micrograph ( Figure 1 c). On comparing the functional groups present in the CCNPs with their bulk counterparts, chitosan showed characteristic peaks at 3418 and 3238 cm −1 (O–H and N–H stretching vibrations), 2908 cm −1 (C–H stretch), 2342 cm −1 (C–N band stretching), 1610 cm −1 (amide II band), and 1024 and 1051 cm −1 (CH 2 group and C–O stretch from the glucosamine residue). Chrysin showed characteristic bands at 2625 cm −1 and 2343 cm −1 , indicating O–H stretching vibration and an intramolecular H-bond ( Figure 1 d). The characteristic peaks of both chitosan (at 2342 cm −1 and 1051 cm −1 ) and chrysin (at 2625 cm −1 and 2343 cm −1 ) were observed in the CCNPs. A slight band shift to lower wave-numbers, at 3347 and 3200 cm −1 , indicated the presence of hydrogen bonding between the O-H group of chrysin and the O-H or -NH 2 group of chitosan [ 24 ]. The other peaks observed in the loaded nanoparticles were the P–O bending peak at 890 cm −1 and a peak at 2625 cm −1 that was broader compared to pure chrysin, indicating an increase in hydrogen bond interactions [ 13 ].
3.2. Loading Efficiency and Release Kinetics of Chrysin-Loaded Chitosan NPs
The amount of chrysin loaded onto the CCNPs was found to be 80.86 ± 0.30%. The in vitro drug release profile of chrysin from the CCNPs was determined in a release medium consisting of PBS and DMSO at 37 °C. The pH of the medium was set at 7.4, as the ionic strength of the medium plays a vital part in the stability and drug release. The cumulative chrysin release as a function of time is depicted in Figure 2 a. The drug release kinetics of the loaded NPs initially showed a burst release, which was followed by a steady and sustained release from the 8th h. The first burst was observed in the first two hours, with 36.33 ± 1.58% of chrysin released. The second burst was observed after the sixth hour, with 80.11 ± 0.84% drug release. The curve plateaued from the 10th h to the 24th h. In total, about 90.5 ± 0.50% of the chrysin was released from the NPs within 10 h.
3.3. Minimum Inhibitory Concentration (MIC) and Sub-MIC of Chrysin-Loaded Chitosan NPs
The MIC value of the CCNPs against S. aureus was determined to be 1024 μg/mL. At 768 μg/mL, the NPs did not exert any effect on the growth of the test bacterium. Hence, 768 μg/mL was selected as the sub-MIC concentration and used in all the subsequent anti-biofilm assays.
3.4. In Vitro Anti-Biofilm Activity of Chrysin-Loaded Chitosan NPs
3.4.1. Crystal Violet Staining Assay for Biofilm Formation and Disruption
The CCNPs showed a greater reduction in biofilm formation compared to their bulk counterparts. Biofilm formation was inhibited by 50.48 ± 2.42% and 54.1 ± 0.56% on treatment with CNPs and chrysin, respectively, whereas the CCNPs inhibited biofilm formation by 66.59 ± 3.09% ( Figure 2 b). The treatment of the preformed biofilm with CCNPs also resulted in a reduction in biofilm mass of 43.50 ± 1.29%, whereas decreases of 14.92 ± 2.17% and 20.94 ± 3.73% were observed in the presence of CNPs and chrysin, respectively ( Figure 2 c).
3.4.2. Microscopic Examination of Biofilm
Light microscopy and CLSM were used to observe the change in the biofilm architecture of S. aureus in the presence and absence of CCNPs. The influence of CCNPs on the thickness, overall structure and density of the biofilm was evident in the micrographs, whereas a dense biofilm was visible in the light microscope images of the untreated samples ( Figure 3 ). A thick biofilm of 80 μm was observed in the control, while the thickness was reduced to 20 μm on treatment with chrysin. A greater reduction in the thickness of the biofilm matrix, to 16 μm, was achieved in the presence of CCNPs ( Figure 3 ).
3.4.3. Exopolysaccharide (EPS) Quantification and Microbial Adhesion to Hydrocarbon (MATH) Assay
The CCNPs showed a better reduction in the synthesis of EPS compared to their bulk counterparts. On treatment with CCNPs, a reduction in EPS production of 38.03 ± 5.41% was observed ( Figure 4 a). Chrysin and CNPs were also able to restrict the production of EPS, by 33.37 ± 4.84% and 26.54 ± 3.20%, respectively. Cell surface hydrophobicity (CSH) is another important factor in biofilm formation, as it aids in the adherence of the cell to the substratum. The CCNPs reduced the CSH of S. aureus by 84.66 ± 2.84%, outperforming their bulk counterparts ( Figure 4 b); CNPs and chrysin showed approximately 61.28 ± 5.78% and 72.46 ± 4.21% decreases in cell surface hydrophobicity, respectively.
3.4.4. Growth Curve Analysis
The growth pattern of the test organism cultivated in the presence and absence of the CCNPs and CNPs is presented in Figure 5 . Although the cells exhibited growth retardation on exposure to a sub-MIC of the NPs and chrysin, there was no significant decrease in cell density. Hence, it can be inferred that the CCNPs arrested biofilm development in S. aureus without markedly inhibiting growth, indicating the potential application of CCNPs in the management of S. aureus -related infections.
4. Discussion
The formation of biofilm is one of the major obstacles in modern antibacterial therapy. The biofilm-forming ability of bacteria provides the pathogen with advantages by blocking the entry of antimicrobial agents, thus hindering the clearance of these pathogens by the host immune system. Drug nanonization—i.e., a reduction in the particle size of drugs to the nano-scale—enhances the intracellular uptake of nanoparticles, thus providing a way to overcome the problems associated with insoluble drugs. Moreover, the increase in the surface area of poorly soluble drugs also leads to a more pronounced increase in the therapeutic index by maximizing the action with a lesser dose [ 25 ]. In the present study, the anti-biofilm activity of CCNPs was demonstrated against the biofilm-forming bacterium S. aureus MCC 2408. Chrysin was encapsulated into chitosan using the ionotropic gelation method. The NPs were formed by the electrostatic interaction between the amine groups of chitosan and the polyphosphate ions of TPP [ 19 ]. The hydrodynamic size influences various properties such as loading efficiency, drug release kinetics and the stability of the NPs. Although smaller nanoparticles show greater encapsulation efficiency owing to their high surface area, they also tend to aggregate easily on storage [ 19 ]. Particles of a low polydispersity index (PDI) are homogeneous in nature and provide maximum stability, whereas a high PDI indicates heterogeneity of the nanoparticles in the mixture. The synthesized spherical nanoparticles showed intermediate polydispersity, which aids in the stability of the CCNPs. Ilk et al. [ 19 ] reported the synthesis of kaempferol-loaded chitosan/TPP nanoparticles with an average particle size of 192.27 nm. The FTIR spectra of the CCNPs indicated the presence of functional groups similar to those of the bulk counterparts, indicating the successful encapsulation of chrysin with chitosan and the formation of chrysin-loaded chitosan NPs.
The biological efficacy of a drug and its potential use in drug delivery are directly influenced by the loading efficiency and controlled release. The CCNPs demonstrated a high encapsulation efficiency with a sustained release. The CCNPs showed a higher loading efficiency compared to previously synthesized nanocomposites such as the BSA-loaded chitosan-TPP nanoparticles with a loading efficacy of 60% [ 13 ] and PEG-chrysin conjugates with a loading efficacy of 55.6% [ 26 ]. The high encapsulation efficiency of CCNPs may be attributed to the hydrogen bonding between the -OH group of chrysin and the -NH2 group of chitosan, which helps in better entrapment of chrysin into the CNPs. From the release kinetics, it can be interpreted that chrysin was not covalently bonded to the nanoparticle and was thus easily released when dispersed in the medium. A similar result was observed for kaempferol-loaded chitosan nanoparticles, where more than 85% of the drug release was attained within 4 h, and no significant quantity of the drug was released thereafter [ 19 ].
Biofilm formation by S. aureus is associated with many nosocomial as well as chronic diseases associated with medical devices and surgical implants. It also contributes to the emergence of multi-drug resistant (MDR) strains, viz. methicillin-resistant S. aureus (MRSA) and vancomycin-resistant S. aureus (VRSA) [ 3 ]. It was found that both CNPs and chrysin exhibited significant anti-biofilm activity relative to the untreated control. However, the anti-biofilm efficacy was comparatively enhanced when chrysin was encapsulated with chitosan. The ability to attach and establish biofilm on inert surfaces contributes to making S. aureus a major pathogen of chronic infections [ 10 ]. The data also suggested that the CCNPs showed better biofilm inhibition ability than disruption of preformed biofilm. Shi et al. [ 10 ] suggested that chitosan-coated iron oxide nanoparticles have the potential to effectively prevent bacterial colonization and control biofilm formation by 53% in S. aureus . The anti-biofilm efficacy of the NPs was also validated by light and CLSM micrographs, which showed a reduction in the thickness and density of the biofilm matrix in the presence of CCNPs.
The EPS matrix plays an indispensable role in the initial cell attachment, the formation of the biofilm architecture and in providing mechanical stability to the biofilm. The EPS produced by biofilm-forming bacteria prevents the access of antimicrobial agents and antibiotics to the bacterial cell [ 18 ]. The CCNPs caused a considerable decrease in EPS production and cell surface hydrophobicity in S. aureus , which resulted in decreased bacterial accumulation and attachment to the substratum. Hence, it can be inferred that CCNPs have a profound effect on the early stages of biofilm formation, specifically adherence and colonization, as compared to the bulk counterparts, chrysin and chitosan.
From the growth curve analysis, it can be interpreted that at the sub-MIC level, the CCNPs exerted little bactericidal effect and selective pressure against S. aureus . However, they had a profound effect on S. aureus in the biofilm mode of growth. Chrysin and chitosan are found to be nontoxic at recommended concentrations. It was reported that the recommended daily dose of this flavone is 0.5 to 3 g [ 27 ]. Likewise, chitosan nanoparticles are nontoxic at low concentrations and found to be toxic only at higher concentrations [ 28 ]. The nontoxic nature of these components may enable the use of CCNPs for biomedical applications. The outcome of the study suggested that the nanoformulation of chrysin exhibits enhanced synergistic anti-biofilm activity against S. aureus when compared to its bulk counterparts, chrysin and chitosan, taken separately. Hence, CCNPs may be considered as a potential therapeutic agent for controlling biofilm formation in S. aureus . The nanocomposites may be further exploited towards the development of anti-biofilm coatings.
5. Conclusions
This study demonstrated enhanced anti-biofilm activity of chrysin against S. aureus when loaded onto chitosan-TPP nanoparticles with a high loading capacity. The chrysin-loaded chitosan nanoparticles were characterized to confirm the effective loading of the flavone onto the chitosan nanoparticles. The anti-biofilm activities of CCNPs were determined through biofilm inhibition, biofilm disruption, EPS reduction and hydrophobicity reduction assays. The synthesized CCNPs could be used as a potential therapeutic agent for controlling biofilm formation in S. aureus in the future. The nanocomposites may be further exploited towards the development of anti-biofilm coatings.
We are grateful to the Central Instrumentation Facility (CIF), Pondicherry University for the DLS and FTIR analyses. We would like to thank the Sophisticated Test and Instrumentation Centre (STIC), Cochin for the TEM analysis. We are also thankful to Bharathidasan University, Tiruchirappalli for supporting us with the confocal laser scanning microscopy. The authors extend their appreciation to the Researchers Supporting Project (number RSP-2019/15), King Saud University, Riyadh, Saudi Arabia.
Author Contributions
Conceptualization, B.S.; methodology, B.S., U.P., and A.S.; software, B.S., U.P., A.S., and R.P.; validation, B.S., and R.P.; formal analysis, B.S.; investigation, B.S., A.M.E., and A.H.B.; resources, B.S., A.S., and K.K.; data curation, B.S., U.P., R.P., K.K., and A.S.; writing—original draft preparation, B.S., U.P., R.P., A.S., and K.K.; writing—review and editing, B.S., R.P., A.S., and K.K.; visualization, B.S., A.M.E., and A.H.B.; supervision, B.S.; project administration, A.S.; funding acquisition, A.S. All authors have read and agreed to the published version of the manuscript.
Funding
The APC was funded by the Researchers Supporting Project (number RSP-2019/15), King Saud University, Riyadh, Saudi Arabia.
Conflicts of Interest
The authors declare no conflict of interest.
Pathogens. 2020 Feb 12; 9(2):115
1. Introduction
Flibanserin (FLB) is a recently FDA-approved nonhormonal drug for the treatment of women with hypoactive sexual desire disorder. FLB acts via decreasing the level of serotonin and increasing the levels of dopamine and norepinephrine to maintain a healthy sexual response [ 1 ]. FLB-treated women have demonstrated significant improvements in both the number of satisfying sexual events and the female sexual function index desire domain score compared with placebo-treated ones. These findings proved the ability of the drug to enhance women’s sexual desire. In addition, administration of FLB was linked with a significant reduction in the distress related to either sexual dysfunction or low sexual desire [ 2 , 3 , 4 , 5 ]. However, the major challenge for oral administration of FLB is its reduced bioavailability (~33%), which might be caused by the drug’s low solubility and its exposure to hepatic first-pass metabolism [ 6 , 7 ].
Recently, intranasal drug administration has gained increasing interest. The nasal pathway is a noninvasive route for active pharmaceutical ingredient (API) administration with the aim of local, systemic, or central nervous system (CNS) action. The nasal cavity represents an ideal absorption surface for drug delivery due to the high vascularity of this area, in addition to the leaky epithelium that results from the low tightness of the intercellular nasal mucosal junctional complex. Furthermore, direct absorption of the molecules from the nasal cavity via the trigeminal and olfactory pathways provides direct entry into the brain and results in a favorable pharmacokinetic/pharmacodynamic profile for centrally acting drugs. Thus, the nasal route could offer an encouraging unconventional approach to enteral and systemic drug administration of CNS-targeting drugs [ 8 , 9 ].
Transfersomes (TRFs), also called deformable or elastic liposomes, are flexible vesicular systems that comprise a phospholipid (PL) and an edge activator. They are considered a modified generation of liposomes and were first developed by Cevc and Blume [ 10 ], who added edge activators to conventional liposomes. The edge activator is usually a single-chain surfactant that enhances the squeezing and penetration of the vesicles through the mucosal barrier via destabilization of the lipid bilayers. The commonly used edge activators include sodium deoxycholate, sodium cholate, Tween, and Span [ 11 , 12 , 13 ]. Intranasal administration of TRFs has been previously reported to enhance the bioavailability of several drugs [ 14 , 15 , 16 ]. Moreover, TRFs have been effectively applied for enhancing the brain distribution of centrally acting medicines [ 17 , 18 , 19 ].
Hydrogel-loaded nanoformulated drugs have drawn significant attention as promising nanoparticulate drug delivery systems that combine hydrogel properties (e.g., hydrophilicity and high water absorption affinity) with nanoparticulate properties (e.g., ultrasmall size) [ 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 ]; they can achieve high drug loading without chemical reactions and are able to release the integrated agents at the target site in a controlled manner. A wide range of natural, naturally derived and synthetic hydrogels can be used for the preparation of hydrogel-loaded nanoformulated drugs [ 26 , 27 , 28 ]. Hydrogels can be prepared from naturally derived protein or polysaccharide polymers [ 29 ]. Synthetic hydrogels have drawn great attention in the biomedical field [ 30 , 31 ] and are obtained through chemical and physical methods. Among the synthetic hydrogels, poly(2-isopropenyl-2-oxazoline) (PiPOx) is a biocompatible polymer synthesized using a simple protocol [ 30 ]. In addition, poly(vinyl alcohol) (PVA) and PVA/poly(ethylene glycol) (PEG) hybrid hydrogels have been synthesized that showed improved mechanical strength when compared with PVA hydrogel [ 32 ].
Among natural and naturally derived hydrogels, the most frequently used are polysaccharides. Polysaccharide-based materials can be divided into two groups, namely polyelectrolytes and non-polyelectrolytes. Polyelectrolytes may be further classified according to their intrinsic charge as cationic (chitosan), anionic (alginate, heparin, pectin, hyaluronic acid), or neutral (pullulan, dextran). Due to their desirable mucoadhesive properties, cellulose derivatives can significantly extend the residence time of drugs in the nasal cavity [ 33 ]. Furthermore, due to their high viscosity following hydration in the nasal cavity, celluloses can sustain the release of drugs. For these reasons, the use of cellulose as an absorption enhancer can lead to improved intranasal absorption and increased bioavailability [ 34 ]. Reports show that celluloses increase the intranasal bioavailability of both small hydrophobic and hydrophilic macromolecular drugs [ 35 ]. Hydroxypropyl methyl cellulose (HPMC) is a popular matrix material in controlled drug delivery systems, and HPMC matrices show a sustained release pattern by two mechanisms, i.e., diffusion and erosion of the gel layer [ 36 ]. The viscosity of the polymer affects the diffusion pathway. HPMC can be employed as a matrix for controlling the release of both hydrophilic and hydrophobic drugs [ 37 ]. HPMC-based gels have shown good surface morphology with high drug loading efficiency, and the viscosities of such preparations lie within a range suitable for nasal administration.
Therefore, the main aim of this study was to develop an optimized FLB-TRF-loaded HPMC-based hydrogel for improved drug delivery to the brain via intranasal administration. A Box–Behnken design was utilized for FLB TRF optimization, and the effects of the FLB-to-PL molar ratio, the edge activator hydrophilic–lipophilic balance (HLB), and the pH of the hydration medium on vesicle size were studied. The optimized TRFs with minimized vesicle size were prepared and incorporated into an HPMC-based hydrogel. The prepared hydrogel was assessed for shape characteristics and ex vivo permeation. In addition, the in vivo performance was evaluated after intranasal administration in Wistar rats.
2. Materials and Methods
2.1. Materials
Flibanserin (FLB) was purchased from Qingdao Sigma Chemical Co., Ltd. (Qingdao, China); Phospholipon 90G (phosphatidyl choline from soy, at least 90% purity) was purchased from Lipoid GmbH (Frigenstr, Ludwigshafen, Germany); Span 65, Span 80, methanol, and chloroform were purchased from Sigma-Aldrich Co. (St. Louis, MO, USA).
2.2. FLB TRF Preparation
FLB TRFs were prepared by hydration of a formed lipid film as previously described [ 38 ]. Briefly, specified amounts of FLB, PL, and edge activator (surfactant) were dissolved in a methanol/chloroform mixture (1:1, v / v ) and subjected to water bath sonication for 5 min. The amounts of FLB, PL, and surfactant were specified according to Table 1 . Span 65 and Span 80 were used in different ratios to achieve the required HLB value of the edge activator indicated in the design ( Table 1 ). The solution was then evaporated using a rotary evaporator at 45 °C. The formed film was kept in a vacuum oven overnight for complete removal of solvent residuals. Subsequently, the dried thin film was hydrated with 20 mL of buffer solution, at the specified pH, for 3 h at 25 °C with gentle shaking.
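Surfactant blending of this kind usually follows the mass-weighted HLB mixing rule; a sketch (the nominal HLB values of ~2.1 for Span 65 and ~4.3 for Span 80 are literature figures assumed by us, so the study's actual blend compositions may differ, and a target HLB above 4.3 would need a different pairing):

```python
def span80_fraction(target_hlb, hlb_span65=2.1, hlb_span80=4.3):
    """Weight fraction of Span 80 in a Span 65/Span 80 blend with a target HLB.

    Assumes the blend HLB is the mass-weighted mean of the component HLBs;
    the default component HLB values are nominal literature figures.
    """
    return (target_hlb - hlb_span65) / (hlb_span80 - hlb_span65)
```

For example, a target HLB of 3.2 would call for a 50:50 blend under these assumed component values.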
2.3. Box–Behnken Design for FLB TRF Preparations
According to previous screening results obtained in our laboratory, the optimization of FLB TRFs was carried out to achieve minimal vesicle size. The FLB:PL molar ratio ( X 1 ), HLB ( X 2 ), and pH of the hydration medium ( X 3 ) were the investigated factors, while vesicle size ( Y 1 ) was the studied response. The X 1 ratios studied were 1:1, 1:3, and 1:5; X 2 values were 2, 4, and 6; and X 3 values were 5, 7, and 9. All other processing and formulation variables, including drug amount (10% w / w ), were kept constant throughout the study. The experimental design generated using Design-Expert software (version 12; Stat-Ease, Inc., Minneapolis, MN, USA) yielded 17 formulations. The actual values of the independent variables of these runs and the observed responses are presented in Table 1 . The measured response was statistically analyzed by the analysis of variance (ANOVA) test, and the polynomial equation representing the best-fitting model was generated. Three-dimensional surface plots were constructed to illustrate the impact of the variables and the interactions between them at p < 0.05. Afterwards, a numerical method following the desirability approach was utilized to predict the optimized FLB TRFs. The predicted formulation was then prepared and further evaluated. The measured responses were compared to the predicted ones, and the residual error was calculated to ensure the success of the optimization process.
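A 17-run design for three factors matches the standard Box–Behnken layout of 12 mid-edge points plus replicated centre points (five centre replicates, a common default, are assumed here). A sketch generating the coded levels, where -1/0/+1 map to the actual values above (e.g., 1:1/1:3/1:5 for X1, 2/4/6 for X2, 5/7/9 for X3):

```python
from itertools import combinations


def box_behnken3(center_points=5):
    """Coded-level (-1/0/+1) runs of a three-factor Box-Behnken design."""
    runs = []
    for i, j in combinations(range(3), 2):        # each pair of factors
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0, 0, 0]                   # third factor held at centre
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0, 0, 0]] * center_points)      # replicated centre runs
    return runs
```

With five centre points this yields the 17 runs reported for the design.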
2.4. Vesicle Size Determination
The vesicle size of freshly prepared FLB TRFs was measured using a Zetasizer Nano ZSP (Malvern Panalytical Ltd., Malvern, UK). Results are expressed as the mean of five determinations.
2.5. Characterization of Optimized FLB TRFs
Vesicle size, polydispersity index (PDI), and zeta potential of the optimized FLB TRFs were determined as described in Section 2.4 using the Malvern size analyzer. In addition, the optimized FLB TRFs were subjected to transmission electron microscopy (TEM). A sample was placed on a copper grid and stained using phosphotungstic acid. After removal of excess stain, the stained sample was dried and examined using a JEOL-JEM-1011 transmission electron microscope (JEOL, Tokyo, Japan).
2.6. Preparation of Optimized FLB-TRF-Loaded Hydrogel
Optimized FLB TRFs were incorporated into a hydroxypropyl methyl cellulose (HPMC) based hydrogel. Briefly, a specified amount of HPMC (0.1 g) was dispersed in distilled water (10 mL) to give a 1% w / v concentration. The gel was kept in the refrigerator overnight, and then FLB TRFs were added with continuous stirring to obtain a drug concentration of 10 mg/g. Control hydrogels incorporating raw drug (10 mg/g gel) were prepared under the same conditions for comparison.
2.7. Optimized FLB TRF Gel Ex Vivo Permeation Study
Freshly excised goat nasal mucosa was utilized for ex vivo permeation studies. The mucosa was equilibrated in simulated nasal fluid (SNF) of pH 6.8 for 15 min. SNF was composed of sodium chloride (0.877% w / v ), calcium chloride (0.058% w / v ), and potassium chloride (0.298% w / v ) dissolved in deionized water [ 39 ]. The mucosa was mounted between the two chambers of the diffusion cell, and the gel sample was placed in the donor chamber [ 40 ]. The diffusion area of the utilized Franz automated diffusion cell (MicroettePlus; Hanson Research, Chatsworth, CA, USA) was 1.76 cm 2 . Gels loaded with optimized FLB TRFs or raw FLB (0.1 g each; 10 mg FLB/g gel) were tested. Seven milliliters of SNF (pH 6.8) was used in the receiver chamber as the diffusion medium, kept at 35 ± 0.5 °C with the agitation rate set at 400 rpm. At specified time intervals, 1.5 mL aliquots were withdrawn and replaced with fresh SNF.
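Because each 1.5 mL aliquot withdrawn from the 7 mL receiver is replaced with fresh SNF, cumulative permeated amounts are usually corrected for the drug removed in earlier samples. One common form of this correction can be sketched as follows (an illustration, not necessarily the exact routine of the Hanson software):

```python
def cumulative_amount(conc, v_receiver=7.0, v_sample=1.5):
    """Cumulative drug amount in the receiver at each sampling time,
    corrected for drug removed with previously withdrawn aliquots.

    conc: measured receiver concentrations (e.g., ug/mL) at each time point.
    Returns amounts in concentration units * mL (e.g., ug)."""
    amounts = []
    removed = 0.0  # running total of drug taken out with earlier aliquots
    for c in conc:
        amounts.append(c * v_receiver + removed)
        removed += c * v_sample
    return amounts
```

The defaults reflect the 7 mL receiver volume and 1.5 mL sampling volume stated above.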
2.8. In Vivo Pharmacokinetic Assessment
The pharmacokinetic performance of the FLB-TRF-loaded hydrogel was investigated in Wistar rats, weighing 200–250 g each, and compared to the control raw-FLB-loaded gel. The study protocol was approved by the Research Ethics Committee, Faculty of Pharmacy, King Abdulaziz University, Kingdom of Saudi Arabia (approval number PH-124-41). The committee ensures that animal use complies with the European Union Directive 2010/63/EU and the DHEW Publication NIH 80-23 Guiding Principles. The study included two animal groups (I and II), with all animals receiving an FLB dose of 10 mg/kg intranasally. Group I received raw FLB gel, and group II received the FLB-TRF-loaded hydrogel. Blood samples were collected at specified time intervals. Six rats from each group were sacrificed at each time interval, and the whole brain was removed, washed with saline, and weighed. Brain tissues were homogenized with phosphate buffer (pH 7.4) at 5000 rpm for 3 min. Plasma and homogenized brain samples were stored at −80 °C prior to analysis [ 41 ].
A volume of 200 μL of plasma sample, or 200 μL of brain homogenate, was transferred to a screw-capped test tube, mixed with 50 μL of internal standard solution (valsartan, 625 ng/μL) and 1 mL of acetonitrile, vortexed for 1 min, and then centrifuged at 5300 rpm for 8 min. An aliquot of the clear supernatant was transferred to a total recovery autosampler vial, and a volume of 7 μL was injected for LC-MS/MS-DAD analysis. The MS system was connected to an Agilent 1200 HPLC system equipped with an autosampler, a quaternary pump, and a column compartment (Palo Alto, CA, USA). The system was equipped with ChemStation software (Rev. B.01.03 SR2 (204)). The IT–MS was controlled using 6300 series trap control software (version 6.2, Build No. 62.24; Bruker Daltonik GmbH), and the general MS settings were as follows: capillary voltage, 4200 V; nebulizer, 37 psi; drying gas, 12 L/min; desolvation temperature, 330 °C; ion charge control (ICC) smart target, 200,000; and maximum accumulation time, 200 ms. The MS scan range was 50–550 m/z. For quantitative monitoring, single positive molecular ion mode was applied at programmed time segments: 0–4.0 min, m/z 391.2 [M+H]+ (FLB); 4.0–10 min, m/z 436.3 [M+H]+ (internal standard). Isocratic elution was conducted at a flow rate of 0.5 mL/min with a mobile phase composed of 52% acetonitrile and 48% water containing 0.1% formic acid. FLB content in the assayed samples was quantified with reference to a calibration curve (range of 1–1000 ng/mL). The calibration curves for FLB were constructed using drug-free plasma and drug-free brain homogenate as calibration matrices. Stock solutions of FLB and valsartan (InSt) were prepared separately by dissolving 10 mg of each in methanol to obtain a concentration of 0.1 mg/mL. A series of calibrator working solutions of FLB was prepared from the stock solution by serial dilution, using methanol as the diluting solvent.
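The serial dilution step above can be sketched directly; the dilution factor and number of steps below are illustrative assumptions (the study specifies only that a dilution series spanning 1–1000 ng/mL was prepared from the 0.1 mg/mL stock, i.e., 100,000 ng/mL):

```python
def serial_dilution(stock_conc, factor, n_steps):
    """Concentrations obtained by repeatedly diluting a stock by `factor`.

    stock_conc and the returned values share the same units (e.g., ng/mL).
    factor and n_steps here are illustrative, not taken from the study."""
    concs = []
    c = stock_conc
    for _ in range(n_steps):
        c = c / factor
        concs.append(c)
    return concs
```

For instance, three successive 1:10 dilutions of a 100,000 ng/mL stock give 10,000, 1000, and 100 ng/mL working solutions.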
The calibration solutions were prepared by spiking drug-free plasma with FLB solutions to give concentrations spanning the range of 1.0 to 1000.0 ng/mL of FLB and a fixed InSt concentration of 25 μg/mL. The calibration solutions were extracted and analyzed by the developed method. The peak area ratios of FLB to InSt were found to be linear over the concentration range of 1.0 to 1000 ng/mL of FLB. Pharmacokinetic parameters including the maximum plasma concentration (C max ), time to maximum plasma concentration (T max ), and area under the plasma concentration–time curve (AUC 0–∞ ) were calculated using Kinetica software (Version 4; Thermo Fisher Scientific, Waltham, MA, USA). The parameters were analyzed for significance using SPSS software (Version 16; SPSS Inc., Chicago, IL, USA). An unpaired Student’s t -test was performed on C max and AUC 0–∞ , while the nonparametric Mann–Whitney test was utilized for analysis of T max ; a significance level of p < 0.05 was set for all investigated pharmacokinetic parameters. For histopathological evaluation, 12 rats were divided into four groups: untreated rats (gp1), rats treated with plain hydrogel without drug (gp2), rats that received raw FLB in the hydrogel (gp3), and rats treated with the FLB-TRF-loaded hydrogel (gp4). The same dosing procedure as described for the pharmacokinetic study was used. After 8 h, histopathologic analysis was conducted according to the method of Young [ 42 ]. In brief, the head was removed, and the brain and jaw were dissected away along with any other listed tissues. The nasal cavity was initially fixed in 10% formalin solution and then decalcified in 10% EDTA solution. The tissue was then placed in 70% ethanol before being embedded in paraffin, sectioned, and stained with hematoxylin and eosin.
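The pharmacokinetic parameters above were computed with Kinetica; for illustration, the underlying non-compartmental quantities can be sketched as follows. This computes AUC(0–t) by the linear trapezoidal rule; AUC 0–∞ would additionally require extrapolation via the terminal elimination rate constant, which is omitted here:

```python
def pk_parameters(times, conc):
    """Cmax, Tmax, and AUC(0-t) (linear trapezoidal rule) from a
    concentration-time profile sampled at the given times."""
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    # Linear trapezoidal rule over successive sampling intervals.
    auc = sum((t2 - t1) * (c1 + c2) / 2
              for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
    return cmax, tmax, auc
```

The times and concentrations would be the sampling schedule and the mean measured concentrations (e.g., ng/mL) for each group.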
2.9. Statistical Analysis
For the in vivo data, the software selected to perform the statistical analysis was GraphPad Prism (San Diego, CA, USA). One-way or two-way analysis of variance (ANOVA), followed by Tukey’s post hoc test, was used for multiple comparisons. Only values of p < 0.05 were considered statistically significant. Each set of experiments was performed at least in triplicate and is reported as means ± SD. For the in vitro Box–Behnken design data, the effects of factors on the response (vesicle size) were statistically analyzed by ANOVA using the Design-Expert software. | 3. Results
3.1. Polynomial Model Selection and Diagnostic Analysis
The observed vesicle sizes of the prepared TRFs were best fitted by the quadratic model, based on its highest correlation coefficient (R 2 ), as shown in Table 2 . There was satisfactory agreement between the predicted and adjusted R 2 values, indicating that the selected model was valid for analyzing the data. Moreover, an adequate precision value greater than 4 indicates an adequate signal-to-noise ratio, implying the suitability of the quadratic model to navigate the design space. Diagnostic plots were generated to verify the goodness of fit of the chosen model. Figure 1 A, illustrating the residuals vs. run plot, shows randomly scattered points, indicating that no lurking variable interfered with the vesicle size. Furthermore, the high linearity of the predicted versus actual values plot ( Figure 1 B) indicates that the observed vesicle sizes were analogous to the predicted ones.
3.2. Statistical Analysis for the Effect of Variables on Vesicle Size (Y)
Vesicle size is a critical parameter with a significant impact on drug permeation across biological membranes. FLB TRFs were in the nanoscale range, with mean sizes ranging from 88 ± 0.86 to 175 ± 2.43 nm ( Table 1 ). The relatively small standard deviations indicate homogeneity of the TRF dispersions. The equation representing the selected sequential model was generated in terms of coded factors as follows:
The statistical analysis revealed that all linear terms corresponding to the three investigated variables had a significant effect on FLB TRF size ( p < 0.05). The quadratic terms corresponding to the surfactant HLB ( X 2 2 ) and hydration medium pH ( X 3 2 ), in addition to the interaction term X 2 X 3 between these two variables, were also significant at the same level. Figure 2 illustrates the contour plots for the effects of the investigated variables on vesicle size.
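The coded factors in the model equation map each actual level onto the [-1, 1] design scale. Using the levels from Section 2.3 (e.g., hydration medium pH levels of 5, 7, and 9), the transformation is:

```python
def to_coded(actual, low, high):
    """Map an actual factor level onto the coded [-1, 1] design scale,
    where `low` and `high` are the extreme levels of the factor."""
    center = (low + high) / 2
    half_range = (high - low) / 2
    return (actual - center) / half_range
```

For example, pH 7 codes to 0 (the center level) and pH 9 codes to +1; the same mapping applies to the HLB levels 2, 4, and 6.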
3.3. FLB TRF Optimization
The optimized FLB TRF formulation was identified using a numerical optimization technique targeting minimized vesicle size. The optimized formulation was prepared at factor levels of a 1:1.12 FLB:PL molar ratio, an HLB value of 2.3, and a hydration medium pH of 7.2. The observed and predicted values for the optimized FLB TRF formulation were in good agreement (with a low error percentage), confirming the reliability of the optimization process ( Table 2 ).
3.4. Characterization of the Optimized FLB TRFs
The PDI of the optimized formulation was found to be 0.201 ± 0.012, while the zeta potential was 8.12 ± 1.54 mV. TEM was applied to assess the shape and lamellarity of the optimized FLB TRFs at 25,000× magnification. As illustrated in Figure 3 , the TRFs appeared as spherical vesicles, with no aggregation observed. In addition, the recorded size was in acceptable agreement with that obtained by the dynamic light scattering technique of the particle size analyzer.
3.5. Optimized FLB TRF Gel Ex Vivo Permeation
Ex vivo permeation through goat nasal mucosa was carried out to give an insight into the in vivo performance of the optimized FLB-TRF-loaded hydrogel. Figure 4 illustrates the mean cumulative percent FLB permeated from the TRF-loaded hydrogel (test) compared to FLB-loaded hydrogel (control). The optimized FLB TRF hydrogel shows a significant increase in cumulative percent FLB permeated when compared to raw FLB gel ( p < 0.05), with almost complete drug permeation after 4 h. The maximum amount of drug permeated within 4 h from optimized FLB TRF hydrogel was approximately 1.97-fold greater than that from raw FLB hydrogel.
3.6. In Vivo Pharmacokinetics
The calibration curves of the concentrations of FLB spiked in plasma and brain homogenate show linear relationships with correlation coefficients of 0.9992 and 0.9984, respectively. The assay shows an adequate precision, with relative standard deviations (RSDs) of 8.1–10.9% and 10.1–12.9% for the intraday assay and the interday assay, respectively. The mean extraction recoveries were 94.8% ± 5.4% and 92.6% ± 7.6% for FLB-spiked plasma and brain samples, respectively. Mean FLB concentrations in plasma and brain following intranasal administration of optimized FLB-TRF-loaded hydrogels, compared to the control FLB-loaded hydrogels, are graphically represented in Figure 5 . The computed pharmacokinetic parameters are compiled in Table 3 .
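The calibration relationship underlying these correlation coefficients is a straight line of peak-area ratio against spiked concentration, inverted to back-calculate unknowns. A minimal ordinary-least-squares sketch with illustrative numbers only (the study used the instrument software for quantification):

```python
from math import sqrt

def fit_line(x, y):
    """Ordinary least squares for y = slope * x + intercept, plus Pearson r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / sqrt(sxx * syy)  # correlation coefficient of the calibration
    return slope, intercept, r

def back_calculate(ratio, slope, intercept):
    """Concentration corresponding to an observed peak-area ratio."""
    return (ratio - intercept) / slope
```

Fitting calibrators spanning 1–1000 ng/mL in this way yields the slope, intercept, and r reported for each matrix, and unknown samples are read off the inverted line.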
The results of the histopathological evaluation, performed to assess the impact of FLB TRFs on the nasal tissues ( Figure 6 A–D), show no pathological signs of epithelial damage, hyperplasia, edema, or inflammatory infiltration in any of the four investigated groups.
The nanoscale size observed could contribute to enhancing the drug permeation via the nasal mucosa and facilitating passing through the blood–brain barrier. Analysis of variance (ANOVA) for the vesicle size affirmed that the quadratic model was significant ( p < 0.0001). The positive sign of the coefficients of the linear terms X 1 and X 2 indicates that the vesicle size increases significantly with increasing drug:PL molar ratio and/or surfactant HLB. Contrarily, the negative sign of the linear term X 3 indicates that the vesicle size decreases significantly with increasing hydration medium pH.
The increase in size with increasing drug:PL molar ratio could be credited to the increased PL content of the vesicles. Similar results have been reported for other vesicular systems. Dubey et al. [ 43 ] demonstrated increased vesicle size of ethosomes with increasing PL content. In another study, Ahmed and Badr-Eldin [ 44 ] reported an increase in avanafil invasome size with increasing PL content of the vesicles. Regarding the HLB of the surfactant, a significant reduction of the vesicle size was observed as the HLB decreased. This observation could be explained by the increased hydrophobicity of the surfactant at lower HLB values. Increased surfactant hydrophobicity could lead to reduced surface energy and low water uptake into the vesicle core, resulting in a reduction of the vesicle size [ 38 , 45 , 46 , 47 ]. The boosted FLB permeation from the optimized FLB TRF gel could be attributed to the synergistic advantages of TRFs and the nanosized system. The flexible and deformable structure of TRFs could impart the potential to pass easily through mucosal barriers. Furthermore, the surfactants, acting as edge activators, could contribute to the permeation-enhancing ability of TRFs by disrupting the lipid bilayer of the membrane [ 46 ]. In addition, the nanoscale size of the vesicles provides a large surface area, thus increasing the contact area with the mucosal epithelium and consequently improving the chance of drug permeation [ 38 ]. Nanovesicles have been reported to enhance drug absorption through the nasal membrane barrier and to demonstrate high efficacy in enhancing drug bioavailability [ 40 ]. However, mucociliary clearance can reduce the contact time of drug-loaded nanovesicles with the mucosal surface inside the nose.
Thus, hydrogels are now considered a useful platform for the preparation of stabilized and smart nanoscopic vehicles for drug delivery purposes. In addition, the incorporation of transfersomes into the hydrogel network can offer controlled-release applications and also improve characteristics such as mechanical strength [ 25 , 42 , 48 , 49 ]. The observed higher extent of absorption from the optimized FLB TRF hydrogel compared to the raw FLB gel could be attributed to the drug’s improved solubility and permeability upon loading into a hydrophobic carrier. Comparing the two intranasal hydrogels, the optimized FLB-TRF-loaded hydrogel shows significant increases in C max and AUC ( p < 0.05) for both plasma and brain compared to the control, indicating higher bioavailability and enhanced brain delivery of the drug. This could be attributed to FLB movement from the nasal cavity along both the olfactory and trigeminal nerves to the parenchyma of the brain. FLB is delivered to the nerves in the cerebrum and pons and then disperses throughout the brain. Brain dispersion of FLB occurs via intracellular and extracellular pathways. In the intracellular mechanism, FLB is internalized by an olfactory neuron through endocytosis, trafficked within the cell to the neuron’s projection site, and then released by exocytosis. In the extracellular pathway, FLB crosses the nasal epithelium to the lamina propria and is then transported externally along the length of the neuronal axon leading into the CNS, where FLB is distributed by fluid movement. The enhanced drug bioavailability could be ascribed to the improved permeation properties of TRFs owing to their flexible and ultra-deformable structure, which enhances penetration across the mucosal barrier [ 50 ].
Furthermore, the elevated concentration of the drug in the brain highlights the capability of TRF to augment direct delivery of the drug to the brain through the nasal olfactory region and across the BBB. The nanoscale size of the vesicles might also yield a shielding effect for the drug, protecting it from fast excretion and metabolism and leading to improved CNS delivery [ 41 ]. | 5. Conclusions
TRF-loaded hydrogel has been investigated as a possible intranasal delivery system for FLB. Box–Behnken design was successfully applied for optimization of FLB TRFs with minimized vesicular size. The optimized FLB TRFs (1:1.12 drug:PL molar ratio, surfactant HLB of 2.3, and hydration medium pH of 7.2) were spherical, with a vesicle size of less than 100 nm. The optimized FLB-TRF-loaded hydrogel showed an enhanced ex vivo permeation profile through goat nasal mucosa when compared to that of the control FLB hydrogel. In vivo assessment in Wistar rats confirmed that the optimized hydrogel had higher bioavailability than the control and exhibited enhanced brain delivery. Based on these results, the proposed optimized FLB-TRF-loaded hydrogel could be considered a promising system for nose-to-brain delivery of the drug.

Abstract

Flibanserin (FLB) is a nonhormonal medicine approved by the Food and Drug Administration (FDA) to treat hypoactive sexual desire disorder in females. However, peroral administration of the medicine is greatly limited by its poor bioavailability, a result of its extensive first-pass effect and poor solubility. Aiming to circumvent these drawbacks, this work involved the formulation of an optimized FLB transfersome (TRF) loaded intranasal hydrogel. Box–Behnken design was utilized for the optimization of FLB TRFs with decreased size. The FLB-to-phospholipid molar ratio, the edge activator hydrophilic lipophilic balance, and the pH of the hydration medium all exhibited significant effects on the TRF size. The optimized TRFs were unilamellar. Hydroxypropyl methyl cellulose based hydrogel loaded with the optimized FLB TRFs exhibited improved ex vivo permeation when compared with the control FLB-loaded hydrogel. In addition, the optimized TRF-loaded hydrogel exhibited higher bioavailability and enhanced brain delivery relative to the control hydrogel following intranasal administration in Wistar rats.
The results point to the potential of the proposed intranasal optimized FLB-TRF-loaded hydrogel to increase the bioavailability and nose-to-brain delivery of the drug.
Conceptualization, O.A.A.A. and U.A.F.; methodology, S.M.B.-E.; software, H.M.A.; validation, Z.A.A., H.Z.A., and A.K.K.; formal analysis, G.C.; investigation, F.C.; resources, A.A.; data curation, R.A.A.-G. (Raniyah A. Al-Ghamdi); writing—original draft preparation, U.A.F.; writing—review and editing, S.M.B.-E., G.C. and F.C.; visualization, R.A.A.-G. (Rawan A. Al-Ghamdi); supervision, Z.A.A.; project administration, N.A.A.; funding acquisition, N.A.A. All authors have read and agreed to the published version of the manuscript.
Funding
This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant No. RG-13-166-41. The authors, therefore, gratefully acknowledge the DSR for technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. | CC BY | no | 2024-01-16 23:35:06 | Nanomaterials (Basel). 2020 Jun 29; 10(7):1270 | oa_package/82/4e/PMC7408465.tar.gz |
PMC7615524 | PMID 37727079
Randomized controlled trials (RCTs) are considered the gold standard for assessing the causal effect of an exposure on an outcome, but are vulnerable to bias due to missingness in the outcome—or “dropout.” The impact of dropout depends on the missingness mechanism and the analysis model ( Dziura et al., 2013 ; Little & Rubin, 2020 ; Rubin, 1976 ). Three missingness mechanisms can be distinguished: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) ( Rubin, 1976 ). With MCAR, missingness is unrelated to any measured or unmeasured characteristics and the observed data are a representative subset of the full data. MAR means that the missingness can be explained by observed data and, with MNAR, missingness is a function of the unobserved data.
Two common methods of dealing with dropout are complete case analysis (CCA) and multiple imputation (MI). A CCA is the analysis model intended to be applied to the trial data at its outset, restricted only to individuals with observed outcomes. With MI, missing outcome values are repeatedly imputed conditional on the observed data, generating multiple complete datasets to which the analysis model is applied ( Buuren, 2018 ; Little & Rubin, 2020 ; Rubin, 1987 , 1996 ), with the resulting estimates subsequently pooled using Rubin’s rules ( Rubin, 1987 ). In practice, a CCA will be unbiased if dropout is MCAR or MAR, conditional on the analysis model covariates. MI will be unbiased if dropout is MCAR or MAR conditional on the analysis model and imputation model covariates, and if the imputation model is correctly specified. Generally, both will be biased when outcomes are MNAR ( Dziura et al., 2013 ; Hughes et al., 2019 ; Little & Rubin, 2020 ). In this article, we consider the case of an RCT with an incomplete continuous outcome, where we assume that the outcome is generated from covariates and treatment according to a linear model. In such an RCT, a CCA will only be biased if the dropout is related to the outcome, conditional on the model covariates ( Carpenter & Smuk, 2021 ; Hughes et al., 2019 ; White & Carlin, 2010 ). As RCT analyses typically adjust for a number of baseline covariates, in this article, we primarily focus on (the bias of) the treatment effect, when estimated conditional on some baseline covariate.
In the presence of dropout, observed data generally cannot be used to establish if outcomes are MNAR or MAR given the model covariates. Whether outcomes are MCAR can be partially tested using Little’s MCAR test, which compares the multivariate distribution of observed variables of patients with observed outcomes to those with unobserved outcomes ( Little, 1988 ). Little’s MCAR test and related tests, however, rely on strong parametric assumptions, with a conclusion that hinges on the specification of some arbitrary P -value cutoff, which limits their practical value ( Li & Stuart, 2019 ). Currently, there is no established statistical test for distinguishing between MAR and MNAR missingness, and consequently, no simple way to determine whether the treatment effect estimate is likely to be biased.
Current guidance for assessing risk of bias due to dropout relies on checking if dropout is differential across trial arms, assessing the plausibility that dropout may be related to outcome (e.g., dropout due to lack of efficacy) ( Higgins et al., 2012 ; Sterne et al., 2019 ), and comparing the baseline covariate distribution across trial arms in patients who are still observed at the end of follow-up ( Groenwold et al., 2014 ). While both differential dropout across trial arms and different baseline covariate distributions across trial arms in the observed data can be caused by MNAR dropout, these markers may also result from MAR dropout. The European Medicines Agency (EMA) and the National Research Council (NRC) recommend using MAR-appropriate methods for the primary analysis, followed by sensitivity analyses that weaken this assumption ( National Research Council Panel on Handling Missing Data in Clinical Trials, 2010 ; European Medicines Agency (EMA), 2020 ). These guidelines, however, are in practice implemented in a fraction of all trials, with on average only 6% (of N = 330 trials) describing the assumed missing data mechanism ( Hussain et al., 2017 ; Rombach et al., 2015 ), 9% ( N = 237) justifying their choice of main analysis ( Rombach et al., 2015 ), and 19% ( N = 849) reporting some kind of sensitivity analysis ( Bell et al., 2014 ; Hussain et al., 2017 ; Rombach et al., 2015 ; Wood et al., 2004 ; Zhang et al., 2017 ), which rarely involves relaxing the primary analysis assumptions ( Bell et al., 2014 ), and only 9% ( N = 200) discussing the risk of bias resulting from missing data ( Zhang et al., 2017 ). This discrepancy between recommended and implemented practice persists despite extensive literature on the subject and may be due to the relative complexity of such analyses.
In this paper, we propose using the differences between the observed variances of the outcome across the two arms of the trial to assess the risk of CCA estimator bias due to MNAR dropout. We show, using directed acyclic graphs (DAGs) and standard statistical theory, how MNAR may give rise to unequal outcome variances between the fully observed subjects in the two arms of the trial. We illustrate this method using individual-level data and summary-level data. Individual-level patient data were obtained from an RCT investigating the benefit of an acupuncture treatment policy for patients with chronic headaches (ISRCTN96537534) ( Vickers et al., 2004 ). The summary data application used published statistics from a cluster-randomized clinical trial, which investigated psychological outcomes following a nurse-led preventive psychological intervention for critically ill patients (POPPI, registration ISRCTN53448131). | Methods for Testing Differences in Variances
Various methods are available for testing and estimating the difference in variance between two groups, including Bartlett’s test ( Bartlett & Fowler, 1937 ), Levene’s test ( Levene, 1960 ), the Brown–Forsythe test ( Brown & Forsythe, 1974 ), the Breusch–Pagan test ( Breusch & Pagan, 1979 ), and the studentized Breusch–Pagan test ( Koenker, 1981 ). In this paper, we employ the latter, as it has a straightforward implementation that allows for conditioning on additional covariates, and is also robust against nonnormally distributed errors. This is particularly relevant, as, in practice, outcomes are unlikely to be strictly normally distributed.
The studentized Breusch–Pagan estimate is obtained as follows. First, the outcome, Y , is regressed on the treatment variable, X , and optional additional covariates, C , in an OLS regression:

Y = β0 + β1 X + β2 C + ε.

The regression residuals, ε̂, are obtained and squared and, in a second auxiliary OLS regression, regressed on the treatment variable:

ε̂ 2 = α0 + α1 X + ν.

The variance difference estimate is given by α̂1, and the test statistic is given by nR 2 , with n the sample size and R 2 the coefficient of determination, obtained from the second auxiliary regression.
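In practice one would use an established implementation (e.g., `bptest` in the R package lmtest, whose default is the studentized version). For the simple two-arm case without additional covariates, the procedure reduces to the following sketch, in which the auxiliary regression slope is the difference in mean squared residuals between arms:

```python
from statistics import mean

def breusch_pagan_binary(y, x):
    """Studentized Breusch-Pagan test for a binary treatment indicator x.

    Returns (variance difference estimate, n * R^2 statistic); the statistic
    is referred to a chi-squared distribution with 1 degree of freedom."""
    # First OLS regression of y on x; with a binary regressor the fitted
    # values are simply the arm-specific means.
    m1 = mean(yi for yi, xi in zip(y, x) if xi == 1)
    m0 = mean(yi for yi, xi in zip(y, x) if xi == 0)
    resid_sq = [(yi - (m1 if xi == 1 else m0)) ** 2 for yi, xi in zip(y, x)]
    # Auxiliary regression of squared residuals on x: the slope equals the
    # difference in mean squared residuals between the two arms.
    s1 = mean(r for r, xi in zip(resid_sq, x) if xi == 1)
    s0 = mean(r for r, xi in zip(resid_sq, x) if xi == 0)
    grand = mean(resid_sq)
    ss_tot = sum((r - grand) ** 2 for r in resid_sq)
    ss_reg = sum(((s1 if xi == 1 else s0) - grand) ** 2 for xi in x)
    r2 = ss_reg / ss_tot if ss_tot > 0 else 0.0
    return s1 - s0, len(y) * r2
```

Conditioning on covariates, as in the full procedure above, would replace the arm means with fitted values from the covariate-adjusted regression.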
The simulation results are shown in Table 2 . We first consider the seven scenarios in which there is no effect modification and the treatment effects are homogeneous (A–G). When dropout is MCAR (A), and when dropout is MAR conditional on treatment (B), some measured covariate (C), or both (D), the treatment effect estimates are unbiased, irrespective of conditioning on Y b , with, on average, a zero outcome variance difference across trial arms in the observed data at baseline and at follow-up. We observe the same for scenario F, where dropout is MNAR dependent on U , but with U and X independent in the observed data. In scenario G, where dropout is MNAR dependent on U and X , with U and X not independent in the observed data, the CCA treatment effect estimate is biased, and the mean outcome variance difference across trial arms at follow-up is nonzero. When dropout is MNAR dependent on Y f (E), we observe a biased treatment effect estimate and nonzero outcome variance differences at both baseline and follow-up. Conditioning on Y b results in an attenuated bias estimate and a smaller variance difference at follow-up. When effect modification is present (EM), resulting in treatment effect heterogeneity, we observe a variance difference at follow-up regardless of the dropout mechanism. In contrast, we only observe a variance difference at baseline in scenario E, where dropout is MNAR dependent on the outcome. In Online Appendix C.2 , a companion table ( Table C.1 ) is provided for Table 2 , with additional measures of simulation performance.
In summary, this simulation shows that a variance difference across trial arms, in the observed data, at baseline indicates outcome-dependent MNAR dropout (scenario E), which may result in a biased CCA treatment effect estimate. If the baseline variance difference is zero but the variance difference across trial arms in the observed data at follow-up is nonzero, then one explanation is that dropout occurs according to the MNAR missingness mechanism in scenario G, which may result in a biased treatment effect estimate. Alternatively, this could be explained by effect modification together with a missingness mechanism that is MCAR (scenario A), MAR (scenarios B, C, and D), or MNAR (scenario F), which will result in an unbiased treatment effect estimate. If both variance differences are zero, then the missingness mechanism is MCAR (scenario A), MAR (scenarios B, C, and D), or MNAR (scenario F) with no effect modification, and the CCA treatment effect estimate will be unbiased. Adjusting for the outcome at baseline will result in an attenuation of the CCA estimator bias and a smaller follow-up variance difference when dropout is MNAR as in scenarios E and G, irrespective of the presence of effect modification.
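The qualitative pattern in scenario E can be reproduced with a deterministic toy example: if dropout occurs above an outcome threshold, the arm with the higher mean is truncated more heavily, so the observed variances diverge across arms even though the full-data variances are equal. A minimal sketch with illustrative numbers only:

```python
from statistics import pvariance

def observed_variances(control, treated, dropout_above):
    """Arm-specific outcome variances after outcome-dependent (MNAR)
    dropout of all values exceeding a threshold."""
    obs_c = [y for y in control if y <= dropout_above]
    obs_t = [y for y in treated if y <= dropout_above]
    return pvariance(obs_c), pvariance(obs_t)
```

With a constant (homogeneous) treatment effect the two full-data variances are identical, yet after outcome-dependent truncation the treated arm's observed variance is smaller, which is the follow-up variance difference discussed above.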
In this paper, we show that, in RCTs, the outcome variance difference across trial arms at baseline, in the set of study participants who did not drop out, is an indicator of outcome-dependent MNAR dropout, and consequently, a biased CCA treatment effect estimate. In contrast, when outcomes are MAR or MCAR, this baseline variance difference will be zero. We also show that the outcome variance difference across trial arms in the observed data at follow-up can be an indicator of both outcome-dependent MNAR dropout and nonoutcome dependent MNAR dropout, both of which may result in a biased treatment effect estimate. A variance difference across trial arms at follow-up, in the observed data, however, can only be meaningfully interpreted when the outcome variances can be expected to be equal across trial arms in the full data. This requires two assumptions: first, that there is no treatment effect heterogeneity; second, that the errors of the outcome are homoskedastic.
Treatment effect heterogeneity can be thought of as nonrandom variability in treatment response that is attributable to patient characteristics. How plausible it is that heterogeneity is absent depends strongly on intervention type and study population. Efficacy trials, for example, typically have stricter inclusion criteria, resulting in more homogeneous study populations, and are less prone to large variations in treatment response. In contrast, pragmatic trials with broad eligibility criteria are more likely to have heterogeneous treatment effects ( Varadhan & Seeger, 2013 ). As treatment effect heterogeneity affects the outcome variance of the intervention arm ( Mills et al., 2021 ), it is a second potential cause of observed outcome variance differences across trial arms. The presence of effect modification can be investigated by performing a stratified analysis, where the effect modifier serves as a stratification variable ( Corraini et al., 2017 ). Alternatively, if the observed difference in outcome variance between trial arms is solely the result of effect modification, then conditioning on the effect modifier and the interaction term between effect modifier and trial arm can be expected to remove all evidence of a variance difference.
An outcome variance difference across trial arms may also result from heteroskedastic outcome errors. Specifically, if the variance of the outcome errors is related to the outcome and if treatment has a causal effect on the outcome, this may cause a difference in outcome variances across trial arms at follow-up. For example, suppose an outcome such as body mass index (BMI) has greater day-to-day variability in people with higher BMI. Then, if the treatment lowers the mean BMI in the intervention arm, this will result in a comparatively smaller intervention arm variance. A simple way to investigate this is to consider the outcome variable measured at baseline, group its values into bins, and establish if the outcome error variance is different in bins with higher and lower mean values.
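The binning check described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's procedure: it assumes two repeated baseline measurements per person (so that the within-person difference isolates measurement error) and a BMI-like variable whose error SD scales with its level; all parameter values are invented.

```python
import random
import statistics

random.seed(1)

# Hypothetical example: each person's true BMI, plus two repeated baseline
# measurements whose day-to-day error SD scales with the underlying level
# (heteroskedastic measurement error).
n = 5000
true_bmi = [random.gauss(27, 4) for _ in range(n)]
m1 = [t + random.gauss(0, 0.04 * abs(t)) for t in true_bmi]
m2 = [t + random.gauss(0, 0.04 * abs(t)) for t in true_bmi]

# Bin by the person-level mean, then compare the spread of the within-person
# difference (which reflects only measurement error) across bins.
records = sorted(((a + b) / 2, a - b) for a, b in zip(m1, m2))
n_bins = 4
size = n // n_bins
bin_error_sd = [statistics.stdev(d for _, d in records[i * size:(i + 1) * size])
                for i in range(n_bins)]
print([round(s, 2) for s in bin_error_sd])  # rising SDs suggest heteroskedastic errors
```

If the error SD rises systematically from the lowest to the highest bin, that is evidence against assumption A2 for this outcome.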
We additionally propose employing the (conditional) outcome variance difference across trial arms in the observed data as an MNAR bias assessment tool, and, indirectly, as a model building tool, which can be used to assess the added value of including variables for explaining the missingness mechanism. This method is easily implemented, using existing tests available in standard software, and has a straightforward interpretation of results. In Section 7 , we demonstrated how outcome variance differences can be used to assess the risk of MNAR bias for various models, using both individual-level data from the acupuncture trial, and summary-level data from the POPPI trial.
The outcome variance difference across trial arms at baseline and at follow-up is suitable for assessing the risk of dropout bias for analysis models that are estimated with OLS linear regression, and assume that the outcome is continuous and given by a linear model. These methods cannot be used for noncontinuous outcomes, such as binary or time-to-event outcomes.
A second limitation of our proposed method is its comparatively modest power, with the power to detect an outcome variance difference lower than the power to detect a difference in outcome means. Brookes et al. (2004) showed that if a trial is powered to detect a mean difference of a given size, then in order to detect an interaction effect of the same magnitude, the sample size needs to be approximately four times larger. Mills et al. (2021) found comparable numbers for the power to detect a variance difference.
Instead of using our method as a strict significance test with a dichotomous conclusion, we recommend assessing the practical implications of the values inside the confidence interval ( Amrhein et al., 2019 ; Andrade, 2019 ; Wasserstein et al., 2019 ). For example, in the individual-level data application of Section 7 , at follow-up, the outcome variances in the observed data were 188.2 and 289.4 in the intervention and usual care arms, respectively, with a variance difference of −100.3 (95% CI: −222.0, 21.4). At baseline, in the observed data, we estimated a variance difference of −81.46 between trial arms, with a 95% CI of −183.21 to 20.30. These results are compatible with a large negative outcome variance difference as well as a small positive outcome variance difference. A large negative variance difference would raise concerns of MNAR dropout, with a large negative outcome variance difference at baseline specifically suggestive of outcome-dependent dropout.
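As a sketch of this style of reporting, the snippet below computes an arm-wise variance difference with a normal-approximation confidence interval, using var(s²) ≈ 2σ⁴/(n − 1), which assumes normally distributed outcomes. The data are simulated (not the acupuncture trial's) and the helper name variance_difference_ci is our own.

```python
import math
import random
import statistics

random.seed(7)

# Illustrative arm-level outcomes (simulated, not the acupuncture trial data).
control = [random.gauss(20, 17) for _ in range(140)]
intervention = [random.gauss(15, 14) for _ in range(160)]

def variance_difference_ci(y1, y0, z=1.96):
    """Variance difference (arm 1 minus arm 0) with a normal-approximation CI.

    Uses var(s^2) ~ 2*sigma^4/(n - 1), which assumes normally distributed
    outcomes; for skewed outcomes a bootstrap CI would be preferable."""
    v1, v0 = statistics.variance(y1), statistics.variance(y0)
    se = math.sqrt(2 * v1 ** 2 / (len(y1) - 1) + 2 * v0 ** 2 / (len(y0) - 1))
    diff = v1 - v0
    return diff, diff - z * se, diff + z * se

diff, lo, hi = variance_difference_ci(intervention, control)
print(round(diff, 1), round(lo, 1), round(hi, 1))
```

With modest arm sizes the interval is wide; the practical question is whether the values it covers would, if real, change the assessment of dropout bias.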
While a variance difference across trial arms in the observed data at baseline indicates outcome-dependent MNAR dropout, interpreting a variance difference at follow-up is less straightforward. For the latter, we suggest performing further analyses to identify the presence of heteroskedastic outcome errors and treatment effect heterogeneity, and using expert and contextual knowledge. If the presence of both heteroskedastic errors and treatment effect heterogeneity is judged to be unlikely, then the possibility of nonoutcome-dependent MNAR dropout should be investigated, for example, by conditioning on additional covariates in the analysis model or, if using MI, in the imputation model, and assessing the effect of this on the variance difference at follow-up. If the results suggest that dropout may be MNAR and that, consequently, the treatment effect estimate is at risk of bias, this motivates performing a sensitivity analysis to assess the robustness of the main analysis results under a plausible MNAR assumption. For example, we may observe a variance difference across trial arms in the observed data at baseline, suggesting outcome-dependent dropout, and also observe that the treatment effect estimate becomes smaller if we additionally condition on covariates that are correlated with the outcome. This would suggest outcome-dependent dropout that results in overestimation of the treatment effect, which may occur, for example, if, on average, more poor responders drop out. A natural subsequent step would involve performing a sensitivity analysis under the MNAR assumption of worse value dropout and investigating how strong this mechanism must be for the material conclusion to be affected. | Randomized controlled trials (RCTs) are vulnerable to bias from missing data. When outcomes are missing not at random (MNAR), estimates from complete case analysis (CCA) and multiple imputation (MI) may be biased. 
There is no statistical test for distinguishing between outcomes missing at random (MAR) and MNAR. Current strategies rely on comparing dropout proportions and covariate distributions, and using auxiliary information to assess the likelihood of dropout being associated with the outcome. We propose using the observed variance difference across trial arms as a tool for assessing the risk of dropout being MNAR in RCTs with continuous outcomes. In an RCT, at randomization, the distributions of all covariates should be equal in the populations randomized to the intervention and control arms. Under the assumption of homogeneous treatment effects and homoskedastic outcome errors, the variance of the outcome will also be equal in the two populations over the course of follow-up. We show that under MAR dropout, the observed outcome variances, conditional on the variables included in the model, are equal across trial arms, whereas MNAR dropout may result in unequal variances. Consequently, unequal observed conditional trial arm variances are an indicator of MNAR dropout and possible bias of the estimated treatment effect. Heterogeneous treatment effects or heteroskedastic outcome errors are another potential cause of observing different outcome variances. We show that for longitudinal data, we can isolate the effect of MNAR outcome-dependent dropout by considering the variance difference at baseline in the same set of patients who are observed at final follow-up. We illustrate our method in simulation for CCA and MI, and in applications using individual-level data and summary data. | Notation
Let U be some unmeasured covariate, and C some measured covariate. Let X = j denote the randomized trial arms, with j = {0, 1}, and X = 0 denoting the comparator arm and X = 1 the intervention arm. We define the continuous outcome variable, Y, as a linear function of X, U, and C so that

Y = α + βX + γU + δC + ε_Y,  (1)

with α some intercept, β the treatment effect, γ and δ the effects of U and C on Y, respectively, and ε_Y the mean-zero error term, with ε_Y independent of X, U, and C, and with U and C additionally independent of X.
Let μ_1 denote the mean of the outcome, Y, when X = 1, and μ_0 the mean of Y when X = 0, with

μ_1 = E[Y | X = 1] = α + β + γE[U | X = 1] + δE[C | X = 1]

and

μ_0 = E[Y | X = 0] = α + γE[U | X = 0] + δE[C | X = 0].
As C, U, and ε_Y are independent of X so that, for example, E[U | X = 1] = E[U | X = 0], the mean difference across trial arms (μ_1 − μ_0) reduces to β and we can write:

β = μ_1 − μ_0.  (2)
We use “full data” to refer to all data that would have been observed on all trial participants, had there been no dropout. “Observed data” refer to all data for the study participants who did not drop out. We define a response indicator R, with R = 1 when Y is observed, and R = 0 when Y is missing. Let Y* denote the outcome in the observed data and β* the treatment effect estimate in the observed data:

Y* = (Y | R = 1),  (3)

β* = E[Y | X = 1, R = 1] − E[Y | X = 0, R = 1].  (4)
The bias, B, of the CCA treatment effect estimate is given by the difference of the population treatment effect in the full data (β) and in the observed data (β*):

B = β − β*.  (5)
Optionally, the treatment effect in the observed data may be defined conditional on some observed covariate(s), C, so that

β*_C = E[Y | X = 1, C, R = 1] − E[Y | X = 0, C, R = 1],  (6)

and the bias, B_C, is given by

B_C = β − β*_C,  (7)

with β*_C estimated in an ordinary least squares (OLS) regression of the observed outcome, Y*, on X and C.
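A minimal simulation sketch of the bias B = β − β* under outcome-dependent dropout, with Y generated from a linear model of the form (1); all parameter values and the dropout rule P(R = 1 | Y) = sigmoid(1 − Y) are illustrative assumptions, not taken from the paper.

```python
import math
import random
import statistics

random.seed(3)

# Full data generated from a linear model of the form (1).
n = 20000
beta = 1.0
X = [i % 2 for i in range(n)]
U = [random.gauss(0, 1) for _ in range(n)]
C = [random.gauss(0, 1) for _ in range(n)]
Y = [0.5 + beta * x + 0.8 * u + 0.6 * c + random.gauss(0, 1)
     for x, u, c in zip(X, U, C)]

# Outcome-dependent (MNAR) dropout: higher outcomes are more likely missing,
# with P(R = 1 | Y) = sigmoid(1 - Y).
keep = [random.random() < 1 / (1 + math.exp(y - 1)) for y in Y]

def mean_diff(y, x, r):
    """Mean difference across trial arms among participants with r == 1."""
    y1 = [yi for yi, xi, ri in zip(y, x, r) if ri and xi == 1]
    y0 = [yi for yi, xi, ri in zip(y, x, r) if ri and xi == 0]
    return statistics.mean(y1) - statistics.mean(y0)

beta_star = mean_diff(Y, X, keep)  # treatment effect in the observed data, as in (4)
bias = beta - beta_star            # B = beta - beta*, as in (5)
print(round(beta_star, 2), round(bias, 2))
```

Because the intervention arm has the higher mean, it loses more of its upper tail, so β* underestimates β and B is positive here.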
By definition, for the linear model in (1), the population variance of Y, for a given trial arm, j, in the full data, is given by

var(Y | X = j) = γ²var(U | X = j) + δ²var(C | X = j) + var(ε_Y | X = j) + 2γδcov(U, C | X = j) + 2γcov(U, ε_Y | X = j) + 2δcov(C, ε_Y | X = j).  (8)
With U and C independent of ε_Y, all covariance terms involving ε_Y are 0, and (8) reduces to

var(Y | X = j) = γ²var(U | X = j) + δ²var(C | X = j) + var(ε_Y | X = j) + 2γδcov(U, C | X = j).  (9)
Let VD denote the outcome variance difference across trial arms in the full data:

VD = var(Y | X = 1) − var(Y | X = 0).  (10)

With Y generated according to (1), U, C, and ε_Y are independent of X so that, for example, var(U | X = 1) = var(U | X = 0), resulting in an expected outcome variance difference of 0 in the full data. The assumptions necessary for this to hold are discussed further in Section 3 .
In the observed data, the outcome variance in a given trial arm, j, is given by

var(Y* | X = j) = var(Y | X = j, R = 1),

and the variance difference across trial arms by

VD* = var(Y | X = 1, R = 1) − var(Y | X = 0, R = 1).  (11)

The exact form of (11) is determined by the relationship between the covariates, outcome, and R, and is explored in Section 3 . Again, the variance may be defined conditional on C, with the variance difference in the full data then given by

VD_C = var(Y | X = 1, C) − var(Y | X = 0, C),  (12)

and, in the observed data, by

VD*_C = var(Y | X = 1, C, R = 1) − var(Y | X = 0, C, R = 1).  (13)

We can estimate (13) using the studentized Breusch–Pagan estimator, detailed further in Section 4 . In the next section ( Section 3 ), we show under which dropout mechanisms the variance difference across trial arms in the observed data can be expected to be different from 0.
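The auxiliary-regression step underlying the studentized Breusch–Pagan estimator can be sketched as follows: regress the outcome on X and C, then regress the squared residuals on the same design; the coefficient on X estimates the variance difference across arms conditional on C (the full test statistic is n·R² from the auxiliary regression, which we omit). The simulated data and the small `ols` helper are our own illustrative assumptions.

```python
import random

random.seed(11)

def ols(rows, y):
    """OLS coefficients via normal equations and Gaussian elimination.

    Each row of `rows` already includes the intercept term."""
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)]
         + [sum(r[i] * yi for r, yi in zip(rows, y))] for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda k: abs(A[k][i]))  # partial pivoting
        A[i], A[piv] = A[piv], A[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            A[k] = [a - f * b for a, b in zip(A[k], A[i])]
    coef = [0.0] * p
    for i in reversed(range(p)):
        coef[i] = (A[i][p] - sum(A[i][j] * coef[j]
                                 for j in range(i + 1, p))) / A[i][i]
    return coef

# Simulated trial: intervention-arm error SD 1.5, control-arm error SD 1.0,
# so the conditional variance difference is 1.5**2 - 1.0**2 = 1.25.
n = 10000
X = [i % 2 for i in range(n)]
C = [random.gauss(0, 1) for _ in range(n)]
Y = [1 + x + 0.5 * c + random.gauss(0, 1.5 if x else 1.0)
     for x, c in zip(X, C)]

design = [[1.0, float(x), c] for x, c in zip(X, C)]
b = ols(design, Y)
resid2 = [(yi - (b[0] + b[1] * x + b[2] * c)) ** 2
          for yi, x, c in zip(Y, X, C)]

# Auxiliary regression of squared residuals on the design: the coefficient on
# X estimates the variance difference across arms, conditional on C.
g = ols(design, resid2)
print(round(g[1], 2))
```

In practice one would use an existing implementation of the studentized (Koenker) Breusch–Pagan test rather than hand-rolled linear algebra; the sketch only exposes the mechanics.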
MAR and MNAR Dropout and Outcome Variances Across Trial Arms in the Observed Data
In an RCT, patients are randomized to treatment, after which treatment is initiated and the patients are followed up over a period of time, during which dropout may occur. Randomization makes it plausible to assume that the trial arms have equal outcome variances prior to treatment initiation in the full data. Given two additional assumptions, we can expect this to hold after treatment initiation and throughout follow-up:
Assumption A1
There is no treatment effect heterogeneity so that the treatment effect, β , is the same for every individual.
Assumption A2
The errors are homoskedastic so that the error term of the outcome (1) , ε Y , does not depend on treatment or on Y itself.
If these two assumptions hold, then the trial arm population outcome variances can be expected to remain the same throughout follow-up in the full data. It then follows that if dropout is present and the trial arm outcome variances are different in the observed data, this must be due to dropout. Here, we use directed acyclic graphs (DAGs) to describe different MAR and MNAR dropout mechanisms, and show, using graphical model theory ( Mohan & Pearl, 2021 ), that the trial arm variances, conditional on the model covariates, are the same when dropout is MAR, but may be different when dropout is MNAR. Additionally, we show, for an outcome, Y ( Equation (1) ), generated according to a linear model, that certain types of MNAR dropout do not result in a biased treatment effect estimate or different trial arm outcome variances. We define bias, B_C (7) , with respect to a treatment effect, β*_C (6) , estimated while adjusting for some observed baseline covariate, C , which is a predictor of Y ( Equation (1) ). Analogously, we define the variance difference as the difference in trial arm outcome variances, conditional on C , denoted VD_C (12) in the full data and VD*_C (13) in the observed data. Let P( Y | X , C ) denote the density of Y , conditional on X and some observed baseline covariate, C , in the full data, and P( Y | X , C, R = 1) the corresponding density in the observed data.
Proposition 1
The densities P( Y | X , C ) and P( Y | X , C, R ) will be identical only when dropout is MCAR or MAR, with R independent of Y given the variables included in the analysis model ( R ⫫ Y | X, C ). Any quantities derived from the densities, such as the mean difference and variance difference across X , will then also be the same. If assumptions A1 and A2 are satisfied so that the variances of the outcome in the two trial arms are equal in the full data, then P( Y | X, C ) = P( Y | X, C, R ) implies that the variances of the outcome in the two trial arms are also equal in the observed data.
In Figure 1(a) , dropout is MCAR, with the response indicator, R , unaffected by any observed or unobserved variables. Figures 1(b)–(d) depict MAR dropout mechanisms, with dropout dependent on treatment, X , on some baseline covariate, C , and on both X and C , respectively. In all four scenarios, R is independent of Y given C and X , and we can show that the density of Y , conditional on X and C , in the full data, is equal to the corresponding density of Y in the observed data: P( Y | X, C ) = P( Y | X, C, R = 1) (proofs given in Online Appendix A.1 ). This has the following implications. First, the outcome mean difference across trial arms conditional on C , (6) , will be the same in the full and observed data so that the CCA treatment effect estimate is unbiased, with B_C = 0. Second, if the outcome variances are equal in X = 1 and X = 0 in the full data, then the outcome variances in the observed data can also be expected to be equal across trial arms. The latter requires that assumptions A1 and A2 are satisfied so that the treatment effects are homogeneous and the outcome errors homoskedastic.
Figures 1(e)–(g) depict MNAR dropout mechanisms, with dropout dependent on the outcome, Y , on some unobserved covariate, U , and on both treatment, X , and U , respectively. For all three scenarios, we can show that R is not independent of Y given the covariates included in the analysis model, and consequently, that P( Y | X, C ) ≠ P( Y | X, C, R = 1) (proofs given in Online Appendix A.1 ). Different densities in the observed and full data imply that the outcome means and variances can be different also so that the CCA treatment effect estimate is biased and the outcome variance difference across trial arms nonzero. However, as we assume that Y ( Equation (1) ) is generated according to a linear model, an exception to this rule arises for MNAR dropout that occurs according to the scenario of Figure 1(f) .
Proposition 2
Let the outcome, Y, be defined as in (1) . If dropout depends on unmeasured covariate, U, and if ( U ⫫ X ) | R, then the outcome mean difference and variance difference across trial arms in the full data can be estimated from the observed data, without conditioning on U. If assumptions A1 and A2 are satisfied so that the variances of the outcome in the two trial arms are equal in the full data, this implies that the variances of the outcome in the two trial arms are also equal in the observed data.
For all scenarios in Figure 1 , we assume that the measured and unmeasured covariates are independent of treatment ( C ⫫ X and U ⫫ X ), which can be expected to hold in a randomized trial setting. Under the additional assumption of homoskedastic errors ( assumption A2 ), ε Y ⫫ X . Then, the treatment effect estimate, β , can be estimated by the unconditional mean difference across trial arms, as in (2) . If we also assume that the treatment effects are homogeneous (A1) , all the variance components in (9) can be expected to be equal across trial arms (e.g., var( U | X = 0) = var( U | X = 1)) so that outcome variance difference across trial arms in the full data, VD (10) , is 0. If these independencies also hold in the observed data ( U ⫫ X | R, C ⫫ X | R , and ε Y ⫫ X | R ), then the unconditional treatment effect estimate in the observed data, β * (4) , is equal to β (2) so that the bias of the CCA treatment effect estimate, B = 0 (5) , and the outcome variance difference across trial arms, in the observed data, VD* (11) , is also 0.
In Figure 1(f) , dropout depends on unmeasured covariate, U . While this is an MNAR dropout mechanism, it results in an unbiased CCA treatment effect estimate and no outcome variance difference across trial arms, as U is independent of X in the observed data: U ⫫ X | R (proof given in Online Appendix A.2 ). While the same reasoning could be applied to Figure 1(g) , where dropout is MNAR dependent on X and U , this, however, would require the additional assumption that the effects of X and U on R are independent (e.g., sicker people drop out but equally so in both trial arms). For the purposes of this paper, we do not make this assumption, and allow X and U to interact (e.g., sicker people drop out but more so in the comparator arm). Then, in the observed data, X and U are no longer independent, that is, U ⫫ X | R no longer holds. Consequently, β * will be biased ( B ≠ 0), and VD* ≠ 0. Formulae for β *, B , and VD* are given in Online Appendix A.2 (Equations (A.13), (A.14), and (A.16) , respectively). Formulae for the bias and outcome variance difference when conditioning on C , (6) and (13) , are given in Equations (A.15) and (A.17) . Note that if U and C are related, this will additionally mean that U ⫫ X | R, C also fails to hold. Conditioning on C will then result in attenuated estimates of the CCA estimator bias, B_C , and the outcome variance difference across trial arms in the observed data, VD*_C , when compared to B and VD*, respectively. When C and U are independent, however, conditioning on C will leave both estimates unaffected.
In summary, when dropout is dependent only on some covariate that is either unobserved or excluded from the model, this will not, for a linear model of Y , result in a biased CCA treatment effect estimate or in an outcome variance difference across trial arms in the observed data, even though such dropout is strictly speaking MNAR. When dropout depends on both some unmeasured covariate and X , and they are not independent in the observed data, this may result in bias and a variance difference. Table 1 provides an overview of when the seven dropout scenarios of Figure 1 result in a biased CCA estimate and an outcome variance difference across trial arms in the observed data, for a linear regression of Y on X and C .
Note that the seven dropout scenarios of Figure 1 and Table 1 are illustrative settings and that we do not provide a comprehensive review of all possible settings. For example, dropout may simultaneously depend on treatment, X (scenario B), some observed covariate, C , (scenario C), and on the outcome, Y (scenario E). If part of the dropout mechanism depends on Y (scenario E) or both X and U (scenario G), this is generally sufficient to cause a biased CCA estimate and an outcome variance difference across trial arms in the observed data. Under assumptions A1 and A2 , an outcome variance difference across trial arms serves as a marker of MNAR dropout that may result in bias. However, such MNAR dropout will not always result in an outcome variance difference, which may, because of several biases acting in different directions, be very small or 0. For example, dropout may depend on the outcome in such a way that the top quartile of the intervention arm and bottom quartile of the control arm drop out. Such a setting would result in a biased CCA estimate but no outcome variance difference across trial arms.
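The quartile example above can be checked numerically. In this sketch, with an assumed true effect of 1.0 and standard normal errors, symmetric outcome-dependent dropout biases the mean-difference estimate while leaving the trial arm variances equal:

```python
import random
import statistics

random.seed(2)

# Simulated trial with true treatment effect 1.0.
n = 20000
X = [i % 2 for i in range(n)]
Y = [1.0 * x + random.gauss(0, 1) for x in X]

y1 = sorted(yi for yi, xi in zip(Y, X) if xi == 1)
y0 = sorted(yi for yi, xi in zip(Y, X) if xi == 0)

# Outcome-dependent dropout: the top quartile of the intervention arm and the
# bottom quartile of the control arm drop out.
kept1 = y1[: 3 * len(y1) // 4]
kept0 = y0[len(y0) // 4:]

est = statistics.mean(kept1) - statistics.mean(kept0)
vd = statistics.variance(kept1) - statistics.variance(kept0)
print(round(est, 2), round(vd, 2))  # estimate well below 1.0; variance difference ~0
```

Because the two truncations are mirror images, the arm variances stay equal even though the estimate is badly biased, so a near-zero variance difference does not rule out MNAR dropout.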
MNAR Dropout and Heterogeneous Treatment Effects in Longitudinal Data
In Section 3 , we showed that outcome-dependent MNAR dropout and MNAR dropout dependent on X and U , with U some unmeasured predictor of Y , can result in a biased treatment effect estimate and an outcome variance difference across trial arms in the observed data, whereas MAR dropout will result in neither, subject to assumptions A1 and A2 . When A1 and A2 hold so that the treatment effects are homogeneous and the outcome errors are homoskedastic, the expected variance difference across trial arms in the full data is 0, which implies that a variance difference across trial arms in the observed data can be used as a marker of MNAR dropout and bias. Treatment effect heterogeneity and heteroskedastic outcome errors, however, will result in a nonzero variance difference across trial arms in both the full and observed data so that MNAR dropout is no longer the only potential cause of a variance difference. Treatment effect heterogeneity can be investigated by checking for the presence of an effect modifier, for example, by performing a stratified analysis. Heteroskedastic outcome errors can be investigated by exploring if the variability in the outcome at baseline is different for patients with lower and higher values. We elaborate on this in the applied example of Section 7.1 .
Here, we examine the implications of violating assumption A1 through the introduction of effect modification, which will result in a nonzero expected outcome variance difference across trial arms in the full data and in the observed data (i.e., patients with outcomes observed at follow-up). We show that when this assumption is violated, for longitudinal data, where the outcome is measured in a time series, the presence of MNAR dropout can still be assessed by looking at the outcome variance difference across trial arms in the outcome measured at baseline.
Outcome variances across trial arms when assumption A1 is violated
Let Y_b denote the outcome measured at baseline, prior to treatment initiation, which is a function of covariates U and C:

Y_b = α_b + γ_b U + δ_b C + ε_b,  (14)

with ε_b the error term. Y_b is unaffected by X, and with U and C independent of ε_b, the baseline outcome variance for a given trial arm, j, is

var(Y_b | X = j) = γ_b²var(U | X = j) + δ_b²var(C | X = j) + var(ε_b | X = j) + 2γ_bδ_b cov(U, C | X = j),  (15)

and the outcome variance difference across trial arms at baseline is 0:

VD_b = var(Y_b | X = 1) − var(Y_b | X = 0) = 0.  (16)

Let Y_f denote the outcome at follow-up, which is a function of the baseline outcome, Y_b, intervention, X, covariates U and C, and effect modifier, S:

Y_f = α_f + Y_b + βX + γU + δC + ζXS + ε_f,  (17)

with ε_f the error term, which is correlated with ε_b. In (17), S modifies the effect of X on Y_f, with ζ the effect of S on the outcome at follow-up, Y_f, in the intervention arm, and the average treatment effect, β_av, given by

β_av = β + ζE[S | X = 1].  (18)
For simplicity, in (17) , we do not specify a main effect of S , and assume that the effect modification is limited to the intervention arm. More generally, an effect modifier can be expected to modify the outcome in both treatment arms. As for (1) , we here assume that X, U , and C are independent of ε_f, and that U and C are independent of X , but allow U and C to be dependent. We make the same assumption for S , but now assume, for simplicity, that S is independent of U and C . The population variance of Y_f in the full data, for the comparator arm, is then given by

var(Y_f | X = 0) = (γ + γ_b)²var(U | X = 0) + (δ + δ_b)²var(C | X = 0) + 2(γ + γ_b)(δ + δ_b)cov(U, C | X = 0) + var(ε_b | X = 0) + var(ε_f | X = 0) + 2cov(ε_b, ε_f | X = 0),  (19)

and, for the intervention arm, by

var(Y_f | X = 1) = (γ + γ_b)²var(U | X = 1) + (δ + δ_b)²var(C | X = 1) + 2(γ + γ_b)(δ + δ_b)cov(U, C | X = 1) + var(ε_b | X = 1) + var(ε_f | X = 1) + 2cov(ε_b, ε_f | X = 1) + ζ²var(S | X = 1).  (20)
With U, C, ε_b, and ε_f independent of X, (19) and (20) result in a variance difference in the outcome at follow-up across trial arms in the full data of

VD_f = var(Y_f | X = 1) − var(Y_f | X = 0) = ζ²var(S | X = 1).  (21)
When ζ ≠ 0 in (17) so that S acts as an effect modifier, assumption A1 is violated and the outcome variance difference across trial arms at follow-up (21) is nonzero. As S ⫫ C , the covariate-adjusted variance difference, VD f (C) =VD f . The derivations of (19) – (21) are given in full in Online Appendix B.1 . Note that if we allow for a dependency between S, U , and C , (20) and (21) will include additional covariance terms, with, for example, for S and U , the term 2 ζ ( γ + γ b )cov( S , U | X = 1) (see Online Appendix B.1, Equations (B.7) and (B.8) ). Also note that while we omit a main effect of S in (17) , including it will not affect the expected bias or variance difference, as S ⫫ X .
In the presence of heterogeneous treatment effects, the outcome variance difference across trial arms in the full data, VD_f (21) , is nonzero. Then, the outcome variance difference across trial arms in the observed data, VD*_f, will also be nonzero, irrespective of the dropout mechanism, and consequently, cannot be used to assess the risk of MNAR dropout. In contrast, treatment effect heterogeneity will not result in an outcome variance difference at baseline, in either the full data (VD_b, (16) ) or the observed data (VD*_b), as Y_b is not affected by treatment. Outcome-dependent dropout, however, may result in an outcome variance difference at baseline in the observed data, as the outcome errors at baseline, ε_b, and at follow-up, ε_f, are correlated. Consequently, unlike VD*_f, VD*_b can be used to assess the risk of MNAR dropout and, by extension, CCA estimator bias, when assumption A1 is violated. Similarly, if assumption A2 does not hold, for example, when the error term depends on the outcome, this will, given that a treatment effect is present, result in a variance difference across trial arms in the full data only in the outcome at follow-up and not at baseline (derivations are given in Online Appendix B.2 ). In the simulation below, we explore the implications of violating assumption A1 and show that, for longitudinal data, the outcome variance difference across trial arms at baseline, in the observed data, can be used to distinguish between treatment effect heterogeneity and outcome-dependent dropout.
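A small simulation, with assumed parameter values and an effect-modification coefficient exaggerated (relative to the paper's ζ = 0.5) for a clear signal, illustrates that effect modification moves the follow-up variance difference away from 0 while leaving the baseline variance difference near 0, here with no dropout at all:

```python
import random
import statistics

random.seed(5)

n = 20000
zeta = 1.0  # effect-modification coefficient, exaggerated for illustration
X = [i % 2 for i in range(n)]
U = [random.gauss(0, 1) for _ in range(n)]
Cv = [random.gauss(0, 1) for _ in range(n)]
S = [random.gauss(0, 1) for _ in range(n)]  # effect modifier, independent of U and Cv

# Correlated baseline and follow-up errors.
eb = [random.gauss(0, 1.2) for _ in range(n)]
ef = [0.4 * e + random.gauss(0, 1.0) for e in eb]

Yb = [0.7 * u + 0.5 * c + e for u, c, e in zip(U, Cv, eb)]
Yf = [yb + 1.0 * x + 0.7 * u + 0.5 * c + zeta * x * s + e
      for yb, x, u, c, s, e in zip(Yb, X, U, Cv, S, ef)]

def vd(y, x):
    """Outcome variance difference across trial arms (intervention minus control)."""
    return (statistics.variance([yi for yi, xi in zip(y, x) if xi == 1])
            - statistics.variance([yi for yi, xi in zip(y, x) if xi == 0]))

vd_b, vd_f = vd(Yb, X), vd(Yf, X)
print(round(vd_b, 2), round(vd_f, 2))  # baseline ~0; follow-up ~ zeta^2 * var(S)
```

A nonzero follow-up variance difference alongside a near-zero baseline variance difference is exactly the pattern that effect modification, rather than outcome-dependent dropout, would produce.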
Methods
We performed a simulation study with the outcome at follow-up, Y_f, simulated according to (17) , the outcome at baseline, Y_b, simulated according to (14) , and with 1000 patients randomly assigned to each trial arm. The errors of Y_b and Y_f, ε_b and ε_f, were drawn from a multivariate normal distribution, with variances of 1.5 and 2, respectively, and a correlation coefficient of 0.433. Dropout was simulated according to the seven mechanisms listed in Tables 1 and 2 and illustrated in Figure 1 , under a logit model, with 28% overall dropout. Each scenario was simulated without effect modification ( ζ = 0), with a true treatment effect β = 1 in the full data, and also with effect modification ( ζ = 0.5), with the average treatment effect in the full data, β_av (18) , also 1.
The outcome variance difference across trial arms in the observed data was calculated at final follow-up (VD*_f) and, in the same set of patients (i.e., patients with observed outcomes at follow-up), at baseline (VD*_b). CCA estimator bias and VD estimates were obtained conditional on observed baseline covariate, C , and when additionally adjusting for the baseline outcome, Y_b. For each scenario, we obtained mean estimates of the CCA estimator bias, VD*_f, and VD*_b, with corresponding 95% confidence intervals (CIs). The 95% CIs were computed using the standard deviation (SD) of the relevant estimate across simulations. We simulated 1000 datasets of N = 2000, having verified, for each estimate, that the Monte Carlo SD (MCSD) and mean standard error were comparable, indicating that 1000 repetitions are sufficient. Additionally, we calculated the proportion of times the null was excluded from the confidence interval, as an indicator of how often a variance difference was correctly identified across simulations. A full description of the simulation framework, in accordance with ADEMP guidelines ( Morris et al., 2019 ), is given in Online Appendix C.1 , where we describe, in detail, the simulation aims, data-generating mechanisms, methods, and performance measures.
Using Conditional Trial Arm Outcome Variances to Evaluate Imputation Models
In this section, we consider the situation where there are measured covariates that are predictive of dropout and outcome, which are not included in the analysis model. In Section 3 , we showed that for an MAR dropout mechanism, the outcome variances in the observed data are equal across trial arms, when conditioning on all variables that affect missingness. Additionally, we showed that this also holds when dropout is MNAR dependent on some unobserved variable, given that this variable is independent of treatment in the observed data. This property, in conjunction with the assumption of homogeneous treatment effects ( assumption A1 ) and homoskedastic outcome errors ( assumption A2 ), can be used to assess the plausibility of bias in a CCA analysis, by comparing the outcome variances across trial arms while conditioning on all analysis model variables. When data are missing, however, investigators may choose to use an MI approach, defining an imputation model that includes auxiliary variables that are not included in the main analysis model. In an MI model, assuming that dropout is MAR conditional on the imputation model variables and that the imputation model is correctly specified, we would expect the variance difference to be zero across the imputed datasets.
In this simulation study, we show that when dropout depends on some covariate, C 2 , and X , and C 2 is excluded from the analysis model, the CCA treatment effect estimate is biased and there is an outcome variance difference across trial arms in the observed data. If C 2 is included in the imputation model, however, fitting the same analysis model to the imputed datasets will result in an unbiased estimate of the treatment effect and no variance difference. Consequently, the outcome variance difference across trial arms in the imputed data can be used to assess the added value of including auxiliary variables in the imputation model.
Methods
We performed a simulation study with the outcome, Y , defined according to a linear model: Y = α + βX + γ 1 C 1 + γ 2 C 2 + ε Y , with C 1 and C 2 two observed independent continuous covariates, and the remaining terms defined as in (1) ( Section 2 ). A total of N = 1000 and N = 10,000 patients were randomized to treatment, with a true treatment effect β = 1, and trial arm outcome variances of 8. Dropout was simulated according to the two dropout mechanisms shown as DAGs in Figure 2 , defined in the same manner as in Figure 1 , with, for example, in DAG 1, Y affected by treatment, X , covariates C 1 and C 2 , and outcomes MAR conditional on X and C 2 . Dropout was simulated under a logit mechanism, with 28% overall dropout.
In the observed data, we performed a CCA linear regression conditional on C 1 . Missing outcomes were imputed conditional on C 1 , C 2 , X , and Y *, generating 10 complete datasets. Treatment effect estimates adjusted for C 1 were obtained for each dataset and subsequently pooled using Rubin’s rules ( Rubin, 1996 ). The corresponding variance differences for both models were estimated conditional on C 1 . The 95% CIs were computed using the SD of the relevant estimate across simulations. We simulated 1000 datasets for each scenario, having verified, for each estimate, that the Monte Carlo SD and mean standard error were comparable, indicating that 1000 repetitions are sufficient. A more detailed description of the simulation framework, in accordance with ADEMP guidelines ( Morris et al., 2019 ), is given in Online Appendix D.1 .
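A simplified, self-contained sketch of scenario 1 (dropout MAR given X and C2, MNAR given X alone): the complete case mean difference that ignores C2 is biased, while imputing the missing outcomes from a per-arm regression on C2 recovers the treatment effect. All parameter values and the dropout rule are assumptions, and this is stochastic regression imputation only; proper MI would also draw the regression parameters from their posterior and use Rubin's rules for the pooled variance.

```python
import math
import random
import statistics

random.seed(9)

n = 20000
beta = 1.0
X = [i % 2 for i in range(n)]
C2 = [random.gauss(0, 1) for _ in range(n)]
Y = [beta * x + 0.8 * c + random.gauss(0, 1) for x, c in zip(X, C2)]

# Dropout depends on X and C2: in the intervention arm, patients with high C2
# tend to drop out. This is MAR given (X, C2) but MNAR given X alone.
keep = [random.random() < 1 / (1 + math.exp(-(1.5 - 1.5 * x * c)))
        for x, c in zip(X, C2)]

def fit(c, y):
    """Simple regression of y on c; returns intercept, slope, residual SD."""
    mc, my = statistics.mean(c), statistics.mean(y)
    slope = (sum((ci - mc) * (yi - my) for ci, yi in zip(c, y))
             / sum((ci - mc) ** 2 for ci in c))
    resid = [yi - (my + slope * (ci - mc)) for ci, yi in zip(c, y)]
    return my - slope * mc, slope, statistics.stdev(resid)

def arm_mean(y, x, j, r=None):
    r = r or [1] * len(y)
    return statistics.mean(yi for yi, xi, ri in zip(y, x, r) if ri and xi == j)

# CCA mean difference ignoring C2: biased.
cca = arm_mean(Y, X, 1, keep) - arm_mean(Y, X, 0, keep)

# Impute missing Y per arm from a regression on C2 among the observed,
# then average the mean-difference estimate over 10 imputed datasets.
models = {j: fit([c for c, x, k in zip(C2, X, keep) if k and x == j],
                 [y for y, x, k in zip(Y, X, keep) if k and x == j])
          for j in (0, 1)}
estimates = []
for _ in range(10):
    y_imp = [y if k else
             models[x][0] + models[x][1] * c + random.gauss(0, models[x][2])
             for y, x, c, k in zip(Y, X, C2, keep)]
    estimates.append(arm_mean(y_imp, X, 1) - arm_mean(y_imp, X, 0))
mi = statistics.mean(estimates)
print(round(cca, 2), round(mi, 2))
```

Comparing the two estimates illustrates the point of this section: the variance difference (and bias) visible in the complete cases disappears once the imputation model conditions on the covariate that drives dropout.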
Results
The simulation results for N = 1000 are shown in Table 3 . In scenario 1 ( Figure 2a ), dropout depends on X and C 2 , and, consequently, is MNAR conditional on analysis model covariates C 1 and X , resulting in a biased CCA treatment effect estimate, with the corresponding outcome variance difference across trial arms nonzero. The dropout, however, is MAR conditional on X and C 2 , and fitting the same analysis model, which regresses Y on X and C 1 , to data imputed conditional on Y*, X, C 1 , and C 2 , results in an unbiased treatment effect estimate and a near-zero variance difference. In scenario 2 ( Figure 2b ), dropout is a function of Y , in addition to C 2 and X , and consequently, MNAR conditional on the analysis model covariates and also MNAR conditional on the imputation model covariates. This results in a biased CCA estimate in the observed data and a biased treatment effect estimate in the imputed data, with, for both, nonzero outcome variance differences across trial arms. In Online Appendix D.2 , a companion table is provided for Table 3 ( Table D.1 ), which includes results for a sample size of N = 10,000 and various measures of simulation performance.
Based on the outcome variance difference in the observed and imputed data, we would conclude, for scenario 1, that including variable C 2 in the imputation model will result in a less biased estimate, while, for scenario 2, we would infer that the imputation model fails to address the dropout mechanism, suggesting that the data are MNAR given the variables in the imputation model.
Previously, in Section 3 , we showed that if dropout depends on some covariate and X , and the two are not independent in the observed data, including the covariate in the analysis model will result in an unbiased CCA estimate and no outcome variance difference across trial arms in the observed data. Here, we show, when using MI, that if this covariate is omitted from the analysis model but included in the imputation model, the resulting treatment effect estimate will be unbiased and the outcome variance difference will be zero. Consequently, the outcome variance difference across trial arms in the imputed data can be used to assess the added value of including variables in the imputation model for explaining the missingness mechanism.
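As a schematic illustration of this diagnostic (a toy setup with invented coefficients and dropout models, not the paper's simulation design), one can check that the residual variance difference across arms is near zero when dropout is MAR given the model covariates, but nonzero when dropout depends directly on the outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.integers(0, 2, n).astype(float)   # randomized treatment arm
c = rng.normal(size=n)                    # fully observed covariate
y = x + c + rng.normal(size=n)            # outcome (invented model)

def residual_var_diff(keep):
    """Fit OLS of y on (1, x, c) in the complete cases and return the
    residual variance difference across arms (the diagnostic quantity)."""
    X = np.column_stack([np.ones(keep.sum()), x[keep], c[keep]])
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    resid = y[keep] - X @ beta
    xk = x[keep]
    return resid[xk == 1].var() - resid[xk == 0].var()

# MAR given (c, x): dropout ignores the outcome's residual noise
keep_mar = rng.random(n) < 1.0 / (1.0 + np.exp(-(c + x - 1.0)))
# MNAR: dropout depends directly on the outcome
keep_mnar = rng.random(n) < 1.0 / (1.0 + np.exp(-(2.0 * y - 1.0)))

vd_mar = residual_var_diff(keep_mar)
vd_mnar = residual_var_diff(keep_mnar)
```

Under the MAR mechanism the variance difference is attributable only to Monte Carlo noise, whereas the outcome-dependent mechanism truncates the residual distribution differently in each arm.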
Application
An application using individual-level data from the acupuncture trial
We now apply our method to individual-level data from an RCT, which compared the effect of two treatments on 401 patients suffering from chronic headaches ( Vickers et al., 2004 ; Vickers, 2006 ). The primary outcome was the headache score at 12 months, with higher values indicating worse symptoms. Patients were randomly allocated to acupuncture intervention ( N = 205) or usual care ( N = 196). The trial found a beneficial effect of acupuncture treatment, with a mean difference in headache score of −4.6 (95% CI: −7.0, −2.2), adjusted for baseline headache score and minimization variables age, sex, headache type, number of years of headache disorder, and site (general practices in England and Wales). At 12 months, 21% and 29% of patients in the acupuncture and usual care arm, respectively, had dropped out. The investigators noted that while dropouts were generally comparable across the two arms, their baseline headache score was on average higher.
Existing methods for assessing risk of bias due to dropout include checking if dropout is differential across trial arms ( Higgins et al., 2012 ; Sterne et al., 2019 ), and if baseline covariate distributions are different across trial arms in patients who are still observed at the end of follow-up ( Groenwold et al., 2014 ). We assessed the relationship between trial arm and dropout by performing a logistic regression of the dropout indicator on treatment, which yielded an association of 0.38 (95% CI: −0.07, 0.84), with the CI just including the null. Note, however, that biased and unbiased treatment effect estimates can be obtained both when dropout is balanced and when it is differential ( Bell et al., 2013 ). We compared the baseline covariate distributions across trial arms by performing linear regressions of each covariate included in the primary analysis model on treatment, using the subset of patients still observed at 12 months, with the covariates standardized to facilitate comparisons. The largest point estimate and narrowest confidence interval were observed for the headache score at baseline (0.14, 95% CI: −0.09, 0.37), though the latter also included the null ( Table 4 ). As with differential dropout, both MAR and MNAR dropout mechanisms can result in different baseline covariate distributions across trial arms, and consequently, neither method is a unique marker for the presence of MNAR dropout.
Using the outcome variance differences across trial arms at 12 months (VD 12 ) and at baseline (VD b ), we assessed the risk of bias due to MNAR dropout for an unadjusted CCA model (M1), a model adjusted for the minimization variables (M2), and a model adjusted for the minimization variables and baseline headache score (M3). VD 12 and VD b were estimated using the studentized Breusch–Pagan test, for the subset of patients still observed at 12 months. Results are reported in Table 5 .
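The paper estimates these quantities with the studentized Breusch–Pagan test; as a simpler sketch of the same quantity, the variance difference can be given a normal-approximation 95% CI using Var(s^2) ≈ 2σ⁴/(n − 1) for normally distributed outcomes. The data below are synthetic draws with the variances discussed in the text and approximate completer counts, not the trial data.

```python
import numpy as np

def variance_difference(y1, y0):
    """Outcome variance difference across trial arms with a
    normal-approximation 95% CI (Var(s^2) ~ 2*sigma^4/(n - 1))."""
    v1, v0 = np.var(y1, ddof=1), np.var(y0, ddof=1)
    n1, n0 = len(y1), len(y0)
    vd = v1 - v0
    se = np.sqrt(2.0 * v1**2 / (n1 - 1) + 2.0 * v0**2 / (n0 - 1))
    return vd, (vd - 1.96 * se, vd + 1.96 * se)

# Synthetic completers (means, variances, and sizes are illustrative)
rng = np.random.default_rng(1)
acu = rng.normal(18.0, np.sqrt(188.2), size=161)
usual = rng.normal(24.0, np.sqrt(289.4), size=140)
vd, ci = variance_difference(acu, usual)
```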
The unadjusted model (M1), regressing headache score on treatment, showed a beneficial effect of acupuncture therapy (−6.1, 95% CI: −9.6, −2.6). At 12 months, the outcome variances in the acupuncture arm and usual care arm were 188.2 and 289.4, respectively, with a variance difference, VD 12 =−100.3 (95% CI: −222.0, 21.4). This result is compatible with a large negative variance difference (−222.0), with a smaller outcome variance in the acupuncture arm, and a small positive variance difference (21.4), with a smaller outcome variance in the usual care arm. At baseline, we estimated an outcome variance difference of VD b = −81.5 (95% CI: −183.2, 20.3), which is once again compatible with a large negative variance difference and a small positive variance difference. A substantial outcome variance difference at baseline raises concerns of outcome-dependent MNAR dropout, whereas, at follow-up, an outcome variance difference may have multiple causes: MNAR dropout, heteroskedastic errors, or treatment effect heterogeneity resulting from effect modification. Adjusting for the five minimization variables (M2) did not affect the estimated treatment effect or VD 12 , whereas additionally adjusting for baseline headache score (M3) resulted in an attenuated treatment effect of −4.64 (95% CI: −7.08, −2.19) and a greatly reduced positive VD 12 of 21.23 (95% CI: −26.83, 69.30), with much tighter confidence bounds. In Section 5.3 , we showed that when dropout is MNAR, conditioning on the outcome at baseline results in a smaller outcome variance difference at follow-up and attenuation of the CCA estimator bias.
However, in the event that the outcome variance difference at follow-up is the result of treatment effect heterogeneity, conditioning on an effect modifier will also result in a decreased variance difference ( Mills et al., 2021 ). A simple way to check if a variable is an effect modifier is to perform a stratified analysis. We repeated the regression of M3 ( Table 5 ) in patients with baseline headache scores below the mean, and in patients with scores above the mean, yielding estimates of −2.92 (95% CI: −5.15, −0.69) and −6.66 (95% CI: −12.49, −0.83), respectively. The difference in treatment effect estimate in patients with lower and higher baseline scores suggests that the headache score at baseline may act as an effect modifier. For comparison, we performed an analogous analysis dividing the patients according to age, which showed no difference in treatment effect estimate between patients below mean age (−4.76; 95% CI: −8.89, −0.63) and above mean age (−4.57; 95% CI: −7.73, −1.40). This suggests that the baseline headache score may be acting as an effect modifier, which would imply that the observed outcome variance difference at 12 months may at least in part be the result of effect modification in the intervention arm.
An outcome variance difference at follow-up may also result from the presence of heteroskedastic outcome errors. This can be assessed by checking if variability in the outcome at baseline is different for patients with lower and higher values. We did this by ordering the baseline headache score values and dividing them into six bins. The variances across bins showed no evidence against homoskedastic outcome errors, with no corresponding increase or decrease in variance observed ( Table E.1, Online Appendix E ).
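The binning check described above can be sketched as follows; the scores are simulated placeholders, and homoskedastic outcome errors would show no systematic increase or decrease in variance across the ordered bins.

```python
import numpy as np

def binned_variances(values, n_bins=6):
    """Order the values, split them into n_bins equal-sized bins, and return
    each bin's variance; a monotone trend across bins would suggest that the
    spread depends on the level (heteroskedastic errors)."""
    ordered = np.sort(np.asarray(values, dtype=float))
    return [float(np.var(b, ddof=1)) for b in np.array_split(ordered, n_bins)]

rng = np.random.default_rng(2)
scores = rng.normal(25.0, 14.0, size=301)   # hypothetical baseline scores
bin_vars = binned_variances(scores)
```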
In summary, our results suggest that the CCA estimate, adjusted for minimization variables and baseline headache score, may be biased due to outcome-dependent dropout, and that the true treatment effect may be more modest. The magnitude of this bias is, however, likely partly reduced by conditioning on the baseline headache score. The bias can be expected to be further attenuated when conditioning on additional variables that are predictors of the outcome, either by including them in the main analysis model or, if using an MI approach, in the imputation model. In the original trial publication ( Vickers et al., 2004 ), the authors additionally obtained the treatment effect estimate when imputing the 12-month dropouts using auxiliary variables that were highly correlated with the headache score at 12 months, including the headache score measured at a previous time point (3 months) and a post-hoc global assessment of headache severity. This yielded a smaller treatment effect estimate of −3.9 (95% CI: −6.4, −1.4), when compared to the CCA estimate adjusted for minimization variables and baseline headache score (−4.6; 95% CI: −7.1, −2.2). This attenuation upon imputing the missing 12-month outcomes using variables highly correlated with the outcome further supports our conclusion that the CCA treatment effect estimate is likely affected by outcome-dependent dropout. This can be further investigated by performing sensitivity analyses under the assumption of a dropout mechanism that is MNAR, dependent on the outcome.
Note that observing a variance difference at baseline is sufficient to raise concern of outcome-dependent dropout. Further investigating the variance difference at follow-up, as we do here, is then not strictly necessary, though it may nevertheless be interesting for interpretation purposes. Investigating the possibility of heterogeneous treatment effects and heteroskedastic errors, however, becomes necessary when there is a nonoutcome-dependent MNAR dropout mechanism, which will only result in an outcome variance difference across trial arms at follow-up but not at baseline.
An application using summary-level data from the POPPI trial
The POPPI trial investigated whether a preventive, complex psychological intervention, initiated in the intensive care unit (ICU), would reduce the development of subsequent posttraumatic stress disorder (PTSD) symptoms at 6 months in ICU patients ( Wade et al., 2019 ). Symptom severity was quantified using the PTSD symptom scale-self-report (PSS-SR) questionnaire, with higher values indicating greater severity. Twenty-four ICUs were randomized to intervention or control, with intervention ICUs providing usual care during a baseline period and the preventive intervention during the intervention period, and control ICUs providing usual care throughout. At 6 months follow-up, 79.3% of patients had completed the PSS-SR questionnaire, with no difference across study arms. The trial found no beneficial effect of intervention, with a mean difference in PSS-SR score of −0.03 (95% CI: −2.58, 2.52), adjusted for age, sex, race/ethnicity, deprivation, preexisting anxiety/depression, planned admission following elective surgery, and the Intensive Care National Audit & Research Centre (ICNARC) Physiology Score.
Using summary statistics from the published study, we performed a t -test for the variance difference ( Mills et al., 2021 ) between trial arms at 6 months, to assess if the study’s reported null result may have been biased by MNAR dropout. Published estimates were means with 95% CIs, adjusted for the previously listed variables, which we used to calculate the outcome variances and corresponding variance differences. We found no evidence for a variance difference across trial arms at baseline (VD b = 11.2; 95% CI: −22.7, 45.2), but a greater variance in the intervention arm at 6 months (VD 6 = 52.5; 95% CI: 18.8, 86.2).
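Backing out a variance from a published mean and its 95% CI can be sketched as below, assuming the interval is a symmetric normal-approximation interval; the summary numbers in the example are hypothetical, not the POPPI trial's.

```python
def var_from_ci(lower, upper, n):
    """Back out a sample variance from a reported mean and its 95% CI:
    SE = (upper - lower) / (2 * 1.96); Var = n * SE^2."""
    se = (upper - lower) / (2.0 * 1.96)
    return n * se**2

# Sanity check: an n = 100 sample with unit variance has SE = 0.1,
# i.e. a 95% CI of mean +/- 0.196
unit_var = var_from_ci(-0.196, 0.196, 100)

# Hypothetical published summaries (not the trial's numbers)
vd = var_from_ci(10.1, 13.5, 600) - var_from_ci(10.8, 13.2, 610)
```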
No variance difference across trial arms in the outcome at baseline indicates that dropout is not outcome-dependent, whereas the variance difference at follow-up may be the result of treatment effect heterogeneity, heteroskedastic outcome errors, or MNAR dropout that does not depend on outcome, as in the scenario of Figure 1(g) , where dropout depends on some unobserved covariate, U , and treatment. For nonoutcome-dependent MNAR dropout to result in bias and a variance difference, U must interact with treatment in the dropout mechanism, which will result in differential dropout across trial arms. In the POPPI trial data, however, dropout was balanced at 6 months follow-up, suggesting that nonoutcome-dependent MNAR dropout is unlikely to be the cause of the observed outcome variance difference across trial arms at follow-up. An outcome variance difference at follow-up may also be the result of heteroskedastic outcome errors. This, however, requires that a treatment effect be present, whereas, here, a null treatment effect was estimated. In summary, our results suggest that there is no MNAR dropout in the POPPI trial at 6 months follow-up, but that there may be treatment effect heterogeneity, which, on average, results in a null treatment effect. Further investigation into potential effect modifiers would require access to individual-level data.
Supplementary Material

Acknowledgments
We would like to thank Dr Andrew Vickers and the acupuncture trial team for making the data from “Acupuncture for chronic headache in primary care: large, pragmatic, randomized trial” publicly available. AH, TP and KT were supported by the Integrative Epidemiology Unit, which receives funding from the UK Medical Research Council and the University of Bristol (MC_UU_00011/3). KHW is affiliated to the Integrative Cancer Epidemiology Programme (ICEP), and works within the Medical Research Council Integrative Epidemiology Unit.
Funding information
University of Bristol; Medical Research Council, Grant/Award Number: MC_UU_00011/3
Data Availability Statement
Individual-level patient data from the acupuncture trial (ISRCTN96537534) are publicly available and can be found in the supplementary materials of Vickers 2006 . This trial was approved by the South West Multicentre Research Ethics Committee and appropriate local ethics committees. | CC BY | no | 2024-01-16 23:47:20 | Biom J. 2023 Dec 1; 65(8):e2200116 | oa_package/98/16/PMC7615524.tar.gz |
Introduction
Lysosomal acid lipase (LAL) hydrolyzes cholesteryl esters (CE) and triacylglycerols (TG) at acidic pH [ 1 , 2 ]. Congenital mutations of the LAL-encoding LIPA gene cause a substantial reduction in enzyme activity, resulting in a massive accumulation of neutral lipids in lysosomes [ 3 , 4 ], ultimately leading to cellular dysfunction and damage of various cells and organs. The main organs affected by LAL deficiency (LAL-D) are the liver, spleen, adrenal glands, small intestine, and the vasculature [ 5 ].
The infantile form of LAL-D, formerly known as Wolman disease, is a severe and rapidly progressive condition that develops within the first few weeks of life. In these patients, LAL activity is 1–2% of mean normal levels, and patients die at a median age of 3.7 months [ 6 ]. The disease manifests with nausea, vomiting, intestinal malabsorption, impaired growth and development, and severe liver damage [ 7 ], rapidly progressing to fibrosis and cirrhosis [ 8 ]. In contrast, late-onset LAL-D, formerly known as CE storage disease, occurs between 2 and 60 years of age and may be latent and asymptomatic; thus, it is often not diagnosed until a routine physical examination and biochemical blood tests are performed. Symptoms include liver damage such as hepatomegaly, elevated alanine and aspartate aminotransferase, steatosis, liver fibrosis, and cirrhosis. In addition, splenomegaly and concomitant pathologies such as anemia and thrombocytopenia may be observed, as well as early development of atherosclerosis [ 5 , 9 ].
In contrast to humans with complete loss of LAL, Lal-deficient (Lal−/−) mice [ 10 ] are viable with a median lifespan of almost one year. Similar to LAL-D patients, they exhibit ectopic CE and TG accumulation, particularly affecting the liver, spleen, adrenal glands, and small intestine [ 11 ]. Moreover, the animals suffer from growth retardation and progressive loss of white and brown adipose tissue [ 10 ]. Lal−/− mice represent a genetic model for early-onset LAL-D but phenotypically and histopathologically more closely resemble patients suffering from late-onset LAL-D [ 12 , 13 ].
Reduced intestinal lipid absorption [ 14 , 15 ], decreased plasma leptin concentrations, and lipodystrophy lead to impaired lipid metabolism in Lal−/− mice, resulting in reduced circulating TG levels in the fasted state [ 16 ]. In addition, TG and CE cannot be hydrolyzed in the absence of LAL and remain entrapped in the lysosomes of the Lal−/− liver. To adapt to the resulting reduced availability of fatty acids (FA), glucose consumption is increased, leading to a decrease in plasma and liver glucose levels as well as hepatic glycogen content [ 16 ].
Skeletal muscle (SM) is a remarkably malleable organ that undergoes substantial remodeling in response to a wide range of different stimuli such as nutrient content or physical activity [ 17 – 19 ]. Recent studies have suggested that lysosomes play a critical role in maintaining SM mass, particularly as an intracellular signaling hub for activation of the mechanistic target of rapamycin complex 1 (mTORC1) and transcription factor EB (TFEB) signaling, which, in conjunction with lysosome biogenesis, regulates mTORC1-mediated protein synthesis [ 20 ]. However, how impaired lipid processing in lysosomes affects SM has not yet been well described. As SM is not able to store excess lipids like adipose tissue, FA that enter the SM are either oxidized for energy production or used for other metabolic processes. For this reason, lipid degradation by autophagy may be difficult to detect under these conditions [ 21 ]. However, FA exported from the lysosome are known to be intensively utilized by the SM for mitochondrial uptake, oxidation, and ATP synthesis to maintain cellular bioenergetics during stress such as physical activity [ 22 ].
Previous reports suggested decreased SM activity [ 23 , 24 ] and mass [ 25 – 28 ] in LAL-D patients, but the role of LAL in the metabolic balance of SM has never been thoroughly investigated. Given the high energy demand of SM, we hypothesized that changes in energy homeostasis caused by the loss of LAL might affect muscle biology. To test our hypothesis, we compared SM from Lal−/− mice with those of their wild-type (Wt) littermates. We found that muscles from Lal−/− mice exhibited morphological and biochemical abnormalities that affected SM phenotype, fiber type distribution, proteomic profiles, and mitochondrial functions, which were associated with compromised exercise performance on a treadmill. | Materials and Methods
Animals
Male young (8–12 or 12–17 weeks of age) and mature (40–55 weeks of age), as well as female young (19–21 weeks of age) and mature (33–44 weeks of age) Lal−/− mice and their corresponding Wt littermates on a C57BL/6J background were used for the experiments. We analyzed quadriceps (QU), gastrocnemius (GA), tibialis anterior (TA) (higher percentage of fast-twitch glycolytic fibers), and soleus (SO) (higher percentage of slow-twitch oxidative fibers) harvested under fed and 6- or 12-h fasted conditions. Mice were maintained in a clean, temperature-controlled (22 ± 1 °C) environment on a regular light–dark cycle (12-h light, 12-h dark). Animals had unlimited access to chow diet (4 % fat and 19 % protein; Altromin 1324, Lage, Germany). All animal experiments were performed according to the European Directive 2010/63/EU in compliance with national laws and were approved by the Austrian Federal Ministry of Education, Science and Research, Vienna, Austria (2020-0.129.904; 2022-0.121.513; BMWFW-66.010/0081-WF/V/3b/2017).
Histology and immunofluorescence
SM were carefully dissected, mounted on Tissue-Tek® O.C.TTM (Sakura Finetek, Hatfield, PA), and snap frozen in liquid nitrogen-cooled 2-methylbutane for 10–20 s. Samples were stored at −80 °C prior to cryosectioning. SM cryosections were washed with PBS and blocked with 0.05 % TBST (0.05 % Tween-20 in TBS) containing 10 % anti-goat serum. Slides were incubated with monoclonal anti-myosin (MYH7) (#M8421, 1:300; Sigma–Aldrich, St. Louis, MO) or anti-laminin antibodies (#PA1-16730, 1:500, Thermo Scientific, Waltham, MA) in blocking solution overnight at 4 °C. After washing with TBS, sections were incubated with secondary goat anti-rabbit Alexa Fluor-488 (#A-11008, 1:250) and anti-rabbit Alexa Fluor-594 (#A-11012, 1:250, both Thermo Fisher Scientific, Waltham, MA) antibodies in TBST plus anti-goat serum for 1 h at RT, followed by a 10-min incubation with DAPI. Slides were mounted with Dako Fluorescence Mounting Medium (Agilent Technologies, Santa Clara, CA) and visualized using an Olympus BX63 microscope equipped with an Olympus DP73 camera (Olympus, Shinjuku, Japan). The cross-sectional area (CSA) and Feret diameter of myofibers were determined using Fiji software (ImageJ® Version 1.52d; plugin “Muscle morphometry”). The areas of immunofluorescently stained fibers were quantified using ImageJ software (Version 1.53r).
RNA isolation, reverse transcription, and real-time PCR
RNA from SM was isolated using TRIsureTM (Meridian, Memphis, TN), and 0.5–1 μg RNA was reverse transcribed using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Carlsbad, CA). Real-time PCR was performed with 6 ng of cDNA and the primer sequences listed in Table S1 on a CFX96 Real Time SystemTM (Bio-Rad Laboratories, Hercules, CA) using GoTaq® qPCR Mastermix (Promega, Madison, WI). Samples were analyzed in duplicate and normalized to cyclophilin A expression as the reference gene. Expression profiles and associated statistical parameters were determined by the 2^(−ΔΔCt) method.
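A minimal sketch of the 2^(−ΔΔCt) calculation; the Ct values are illustrative only, and perfect amplification efficiency is assumed.

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method: the target gene's Ct is
    normalized to the reference gene (here cyclophilin A) and then to the
    control group."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** -ddct

# Crossing one cycle earlier than the control group, at equal reference-gene
# Ct, corresponds to two-fold higher expression
fold = ddct_fold_change(24.0, 18.0, 25.0, 18.0)
```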
Proteasome activity assay
To determine chymotrypsin-like, trypsin-like, and caspase-like activities in SM, we used the Proteasome Activity Fluorometric Assay Kit II (UBPBio, Aurora, CO) according to the manufacturer’s protocol. The standard curve was used to calculate the absolute amounts of released 7-amino-4-methylcoumarin (AMC) fluorescence in each sample.
Amino acid (AA) quantification
Plasma samples (70 μL) from fed and fasted female mice (aged 33–44 weeks) were mixed with 50 μL of 1.5 M perchloric acid, vortexed, and kept at RT for 2 min before adding 1.125 mL of water and 25 μL of 2 M K 2 CO 3 . The tubes were again vortexed, centrifuged at 3,000× g for 5 min, and the supernatant was collected and stored at −80 °C until analysis. AA were separated using a high-performance liquid chromatography (HPLC) system equipped with an LC20AD pump (Shimadzu, Kyoto, Japan), an autosampler (Waters 717 plus; Waters, Milford, MA), and a scanning fluorescence detector (Waters 474, Waters, Milford, MA) controlled by LabSolutions software. Chromatographic separation was performed using a Supelcosil LC18 3 μm column (150 × 4.6 mm) (Sigma–Aldrich, St. Louis, MO). The autosampler was programmed in addition mode to mix 25 μL of sample with 25 μL of o-phthalaldehyde reagent in the sample loop. The delay time was set to 1 min. A mixture of phase A (0.1 M sodium acetate, methanol, and tetrahydrofuran, pH 7.2) and phase B (methanol) was used as the mobile phase at a flow rate of 1.1 mL/min. The gradient program is shown in Table S2 . Fluorescence was measured at an excitation wavelength of 340 nm and an emission wavelength of 455 nm.
Western blotting
SM samples were lysed in RIPA buffer using Precellys (Bertin Instruments, Bretonneux, France). Fifty micrograms of protein were separated by SDS-PAGE, transferred to a PVDF membrane, and incubated with the following anti-mouse antibodies: monoclonal anti-myosin (MYH7) (#M8421, 1:500; Sigma–Aldrich, St. Louis, MO), pAKT (Ser473) (#4051, 1:1,000), AKT (#9272, 1:1,000), p4eBP1 (#2855, 1:1,000), and 4eBP1 (#9644, 1:1,000; all from Cell Signaling, Danvers, MA). GAPDH (#2118, 1:1,000; Cell Signaling, Danvers, MA) and α-tubulin (NB100-690, 1:1,000; Novus, Centennial, CO) antibodies were used as loading controls. HRP-conjugated anti-rabbit (#31460, 1:2,500; Thermo Fisher Scientific, Waltham, MA) and anti-mouse (P0260, 1:1,000; Dako, Glostrup, Denmark) secondary antibodies were applied, and signals were visualized using the ClarityTM Western ECL Substrate Kit on a ChemiDocTM MP imaging system (both Bio-Rad Laboratories, Hercules, CA). pAkt/AKT and p4eBP1/4eBP1 ratios were estimated by densitometry (ImageJ® Software, Version 1.53r). MYH7 expression was normalized to the expression of GAPDH.
Sample preparation, data acquisition, and metabolomic analysis by nuclear magnetic resonance (NMR)
Snap-frozen GA samples from young male mice were processed as described previously [ 29 ]. NMR spectra were processed and analyzed as recently described [ 30 ].
Measurement of ATP by mass spectrometry
SM samples from young male mice (~ 25 mg) were transferred to 2 mL Safe-Lock PP-tubes and sonicated in 500 μL 80 % MeOH containing internal standards (670 pmol glutarate, 168 pmol d4-glycocholic acid; Sigma–Aldrich, St. Louis, MO) using a Bioruptor Pico (30 min, 4 °C, 30 s ON/30 s OFF, frequency high; Diagenode, Denville, NJ). After addition of 300 μL ddH 2 O and 900 μL methyl tertbutyl ether (MTBE), samples were incubated for 15 min at 4 °C under shaking. After centrifugation (18,213× g , 10 min, 4 °C), 800 μL of the upper phase was removed and replaced with 800 μL of artificial upper phase (MTBE/MeOH/ddH 2 O, 9/4/4, v/v/v). After re-incubation and centrifugation as described above, the complete upper phase was removed, and 800 μL of the lower phase was collected and dried using a SpeedVac (Thermo Fisher Scientific, Waltham, MA). Water-soluble metabolites were resolved in 100 μL of 70 % acetonitrile (ACN)/30 % H 2 O/0.5 mM medronic acid and used for liquid chromatography/mass spectrometry analysis. Tissue residues were dried, solubilized in 0.3 N NaOH (55 °C, ~4 h), and protein content was determined using PierceTM BCA reagent (Thermo Fisher Scientific, Waltham, MA) according to the manufacturer’s guidelines. For external calibration, ATP concentrations (20 pmol - 200 nmol; Sigma–Aldrich, St. Louis, MO) were prepared in 500 μL of 80 % MeOH containing an internal standard and processed as described above.
Chromatographic separation was performed on a Vanquish UHPLC + system (Thermo Fisher Scientific, Waltham, MA) equipped with an ACQUITY UPLC BEH Amide column (2.1 × 150 mm, 1.7 μm; Waters, Milford, MA) using an 18-min gradient (400 μL/min) of 97 % solvent A (ACN/ddH 2 O, 95/5, v/v; 10 mM NH 4 FA, 10 mM NH 3 ) to 65 % solvent B (ddH 2 O/ACN, 95/5, v/v; 20 mM NH 4 FA, 20 mM NH 3 ). The column compartment was kept at 40 °C. A QExactive Focus mass spectrometer (Thermo Fisher Scientific, Waltham, MA) equipped with a heated electrospray ionization source (HESI II) was used to detect metabolites in negative data-dependent acquisition mode ( m/z 60–900). ATP was identified based on the accurate m/z of the [M–H]- ion (<5 ppm) and comparison of the retention time and MS2 spectra to a synthetic reference compound (Sigma–Aldrich, St. Louis, MO). Blank signals were subtracted from ATP peak areas, which were normalized to the internal standard and quantified by comparing ATP/standard ratios to the external calibration curve. ATP concentrations per sample were normalized using the sample wet weight.
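The quantification arithmetic (blank subtraction, internal-standard normalization, and read-off from a linear external calibration curve) can be sketched as follows; the peak areas and calibration points are invented, and a linear fit is assumed for the calibration series.

```python
import numpy as np

def quantify_atp(peak_area, blank_area, istd_area, calib_ratios, calib_amounts):
    """Subtract the blank, normalize to the internal standard, and read the
    amount off a linear external calibration curve (analyte/standard ratio
    versus known amount)."""
    ratio = (peak_area - blank_area) / istd_area
    slope, intercept = np.polyfit(calib_ratios, calib_amounts, 1)
    return slope * ratio + intercept

# Invented peak areas and a perfectly linear calibration series
amount = quantify_atp(1200.0, 200.0, 500.0,
                      calib_ratios=[0.5, 1.0, 2.0, 4.0],
                      calib_amounts=[50.0, 100.0, 200.0, 400.0])
```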
2-Deoxy-D-glucose uptake
Six-hours fasted young female mice were injected intraperitoneally with a 10 % 2-deoxy- d -glucose solution in PBS containing 2 μCi [ 3 H]-2-deoxy- d -glucose/30 g body weight. Animals were sacrificed 60 min post-injection, and liver and SM were isolated, lyophilized for 48 h, and dry weight was measured. Tissues were then digested in 1 mL of 1 M NaOH overnight at 65 °C, transferred to a scintillation vial containing 8 mL scintillation cocktail, mixed properly, and stored overnight at 4 °C. Radioactivity was determined by liquid scintillation counting and normalized to dry tissue weight.
Lipid extraction and biochemical analysis
Lipid extraction and quantification of TG, total cholesterol (TC), free cholesterol (FC), and CE concentrations were performed as previously described [ 31 ].
Analysis of acylcarnitines by mass spectrometry
SM samples from overnight-fasted young (8–12 weeks old) male mice were pulverized in liquid nitrogen and 5–15 mg were transferred to 2 mL Safe-Lock PP-tubes. Lipids were extracted according to Matyash et al. [ 32 ]. In brief, samples were homogenized with two 6-mm steel beads on a Mixer Mill (Retsch, Haan, Germany; 2 × 10 s, frequency 30/s) in 700 μL MTBE/MeOH (3:1) containing 500 pmol butylated hydroxytoluene, 1 % acetic acid, and 3 pmol palmitoyl-1,2,3,4– 13 C4- l -carnitine (Sigma–Aldrich, St. Louis, MO) as internal standards. Lipids were extracted under constant shaking for 30 min at RT. After addition of 140 μL dH 2 O, samples were vigorously vortexed (3 × 10 s) and centrifuged at 1,000× g for 15 min. Thereafter, 500 μL of the upper, organic phase was collected and dried under a stream of nitrogen. Lipids were dissolved in 500 μL MTBE/methanol (3:1) and diluted 1:5 in 2-propanol/methanol/dH 2 O (7:2.5:1) for UHPLC-QqQ analysis. The remaining protein slurry was dried and used for BCA protein determination after lysis in 300 μL of 0.3 N NaOH at 60 °C. Chromatographic separation was performed on a 1290 Infinity II LC system (Agilent, Santa Clara, CA) equipped with a Zorbax RRHD Eclipse Plus C18 column (2.1 × 50 mm, 1.8 μm; Agilent) and a 10-min gradient of 95 % solvent A (H 2 O; 10 mM ammonium acetate, 0.1 % formic acid, 8 μM phosphoric acid) to 100 % solvent B (2-propanol; 10 mM ammonium acetate, 0.1 % formic acid, 8 μM phosphoric acid) at a flow rate of 500 μL/min. The column compartment was kept at 50 °C. Lipids were detected in positive mode using a 6470 triple quadrupole mass spectrometer (Agilent, Santa Clara, CA) equipped with an ESI source. Acylcarnitine species were analyzed by dynamic multiple reaction monitoring ([M+H] + to m/z 84.9, CE 28, Fragmentor 164, CAV 5).
Data acquisition and processing were performed by MassHunter Data Acquisition software (Version 10.0 SR1, Agilent, Santa Clara, CA) and MassHunter Workstation Quantitative Analysis for QQQ (Version 10.0, Agilent, Santa Clara, CA), respectively. Data were normalized for recovery, extraction, and ionization efficacy by calculating analyte/internal standard ratios (AU) and expressed as AU/μg protein.
Measurement of mitochondrial respiration and fatty acid oxidation (FAO)
Oxygen consumption for estimation of mitochondrial respiration and FA oxidation (FAO) in permeabilized SM fibers was measured with a high-resolution Oxygraph-2k respirometer (Oroboros Instruments, Innsbruck, Austria) as described previously [ 33 ]. Briefly, the predominantly oxidative part of GA was separated into small bundles and permeabilized with saponin (50 μg/mL). One to 3 mg of the fibers were transferred to the calibrated respirometer, which contained 2 mL of respiration medium (MiRO6; 110 mM d -sucrose, 60 mM potassium lactobionate, 0.5 mM EGTA, 3 mM MgCl 2 , 20 mM taurine, 10 mM KH 2 PO 4 , 20 mM HEPES, 1 g/L bovine serum albumin, and ~ 280 U/mL catalase) in each chamber.
Substrate/uncoupler/inhibitor (SUIT) protocol 11 was used to examine mitochondrial respiration. The following substrates and inhibitors were added sequentially after the oxygen slope was stable: 10 mM malate followed by 10 mM glutamate (basal respiration), 5 mM ADP (active ADP-stimulated state), 10 μM cytochrome C, 10 mM succinate, 1 μM carbonyl cyanide m-chlorophenylhydrazone (CCCP), 0.5 μM rotenone, and 2.5 μM antimycin. To analyze FAO, we used the SUIT-005 protocol with some modifications. The following substrates and inhibitors were added sequentially after the oxygen slope was stable: 100 μM and 2 mM malate (basal respiration and respiratory stimulation of FAO) (F-pathway), followed by 25 mM ADP, 10 μM cytochrome C, 5 mM pyruvate (respiratory stimulation by simultaneous action of the F-pathway and NADH electron transfer pathway (ETP, N-pathway) together with convergent electron flow), 10 mM succinate, 1 μM CCCP, 0.5 μM rotenone, and 2.5 μM antimycin. Respiration signals were analyzed using DatLab O2k-6 software.
Electron microscopy
GA was collected from Lal−/− and corresponding control mice perfused with 4 % paraformaldehyde. The muscles were then processed as previously described [ 31 ].
Quantification of the number of mitochondria
The concentration of extracted DNA was estimated using Nanodrop (Peqlab, Darmstadt, Germany) and diluted to 10 ng/μL for qPCR amplification to compare the expression of 16S (mitochondrial gene) with that of hexokinase 2 ( hk2 , nuclear gene). Primer sequences are listed in Table S1 . The number of mitochondria was determined from the ratio of mitochondrial (mt) to nuclear (n)DNA as previously described [ 34 ].
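Assuming roughly 100% amplification efficiency for both amplicons, the mtDNA/nDNA ratio reduces to a ΔCt calculation; the Ct values below are illustrative, and some protocols additionally correct for the diploid nuclear genome, which this sketch omits.

```python
def mt_per_nuclear_genome(ct_mito, ct_nuclear):
    """Relative mtDNA copy number per nuclear DNA copy from qPCR Ct values of
    a mitochondrial (16S) and a nuclear (hk2) amplicon, assuming ~100%
    amplification efficiency: ratio = 2^(Ct_nuclear - Ct_mito)."""
    return 2.0 ** (ct_nuclear - ct_mito)

# Illustrative Ct values: 16S crossing 8 cycles before hk2 gives a 2^8 ratio
ratio = mt_per_nuclear_genome(ct_mito=15.0, ct_nuclear=23.0)
```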
Measurement of maximum O 2 consumption (VO 2 max ) and peak effort testing using treadmills
Effort tolerance, peak effort, and maximal oxygen consumption (VO 2 max ) of Wt and Lal−/− mice were studied using a motorized treadmill coupled to a calorimetric unit with gas analyzer (CaloTreadmill, TSE Systems GmbH, Bad Homburg, Germany) as described previously [ 35 ]. Briefly, mice were subjected to a ramp running protocol with an initial adaptation velocity of 3 m/min for 60 s, followed by a constant acceleration of 3 m/min without inclination. The exercise session ended at maximal exhaustion, defined as the animal’s inability to maintain running speed despite being in contact with the electrical grid for more than 5 s. VO 2 max and run distance were determined at the point at which oxygen uptake reached a plateau during exhaustive exercise. Maximum workload was calculated as the final running distance multiplied by body weight and divided by 1,000.
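The workload definition above is plain arithmetic and can be made explicit; the speed profile in the example is hypothetical, for illustration only:

```python
def total_distance_m(segments):
    """Distance covered over (speed in m/min, duration in min) segments."""
    return sum(speed * duration for speed, duration in segments)

def max_workload(distance_m, body_weight_g):
    """Maximum workload as defined above: distance x body weight / 1,000."""
    return distance_m * body_weight_g / 1000.0

# Hypothetical run: 1 min adaptation at 3 m/min, then 10 min at an
# average of 20 m/min, by a 25 g mouse.
distance = total_distance_m([(3, 1), (20, 10)])  # 203 m
print(max_workload(distance, body_weight_g=25.0))  # 5.075
```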
Sample preparation and processing for proteomics analysis
The “red” deep proximal and medial and the “white” most superficial parts of GA were lysed in 100 mM Tris–HCl (pH 8.5) containing 1 % SDS, 10 mM Tris(2-carboxyethyl)phosphine, and 40 mM chloroacetamide using a Bead Mill Max in combination with 2.8 mm ceramic beads (VWR International GmbH, Darmstadt, Germany). Samples were then reduced and alkylated at 95 °C for 10 min and centrifuged at 7,000× g and 4 °C for 5 min to remove cell debris. After protein estimation by the Pierce™ BCA Protein Assay (Thermo Fisher Scientific, Waltham, MA), 50 μg of each sample was precipitated with acetone, dissolved in 50 mM Tris–HCl (pH 8.5), and digested with Promega Trypsin/LysC Mix (25:1) by overnight shaking at 37 °C. Thereafter, 4 μg of the peptide solution was acidified to a final concentration of 1 % trifluoroacetic acid and desalted using self-made stage-tips with styrenedivinylbenzene reversed-phase sulfonate (SDB-RPS) as material.
Proteome analysis by liquid chromatography-tandem mass spectrometry (LC-MS/MS)
Peptides were separated on the UltiMate™ 3000 RSLCnano Dionex system (Thermo Fisher Scientific, Waltham, MA) using an IonOpticks Aurora Series UHPLC C18 column (250 mm × 75 μm, 1.6 μm) (IonOpticks, Fitzroy, Australia) by applying an 86.5 min gradient at a flow rate of 400 nL/min at 40 °C (solvent A: 0.1 % formic acid in water; solvent B: acetonitrile with 0.1 % formic acid; 0–5.5 min: 2 % B; 5.5–25.5 min: 2–10 % B; 25.5–45.5 min: 10–25 % B; 45.5–55.5 min: 25–37 % B; 55.5–65.5 min: 37–80 % B; 65.5–75.5 min: 80 % B; 75.5–76.5 min: 80–2 % B; 76.5–86.5 min: 2 % B). The timsTOF Pro mass spectrometer (Bruker Daltonics GmbH, Bremen, Germany) was operated as follows: positive mode, enabled trapped ion mobility spectrometry (TIMS), 100 % duty cycle (ramp 100 ms); source capillary voltage: 1600 V; dry gas flow: 3 L/min, 180 °C; scan mode: data-independent parallel accumulation–serial fragmentation (dia-PASEF) as previously described by Meier [ 36 ], using 21 × 25 Th isolation windows, m/z 475–1,000, and 0 Th overlap between windows. Two to three isolation windows were fragmented per TIMS ramp after each MS1 scan (overall DIA cycle time: 0.95 s).
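The stepwise gradient can be written down as a table and sanity-checked; the sketch below reproduces the segments listed above and interpolates %B at any time point:

```python
# (start_min, end_min, %B at start, %B at end) for each gradient segment.
GRADIENT = [
    (0.0, 5.5, 2, 2), (5.5, 25.5, 2, 10), (25.5, 45.5, 10, 25),
    (45.5, 55.5, 25, 37), (55.5, 65.5, 37, 80), (65.5, 75.5, 80, 80),
    (75.5, 76.5, 80, 2), (76.5, 86.5, 2, 2),
]

def percent_b(t_min):
    """Linearly interpolated %B at a given run time (min)."""
    for t0, t1, b0, b1 in GRADIENT:
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 86.5 min method")

# Segments are contiguous in time and the %B trace has no jumps.
assert all(a[1] == b[0] and a[3] == b[2] for a, b in zip(GRADIENT, GRADIENT[1:]))
print(percent_b(15.5))  # 6.0 (halfway through the 20 min 2-10 % B ramp)
```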
LC-MS/MS proteomics data processing, bioinformatics, and statistical analysis
Raw data files were analyzed and proteins were quantified using DIA-NN software (version 1.8.1) [ 37 , 38 ]. The SwissProt Mus musculus proteome database in fasta format (containing common contaminants; downloaded on 2021/08/17, 17,219 sequences) was used for a library-free search with the false discovery rate (FDR) set to 1 %. Deep learning-based spectra and retention time prediction was enabled, the minimum fragment m/z was set to 200 and the maximum fragment m/z to 1800. N-terminal methionine excision was enabled, and the maximum number of trypsin missed cleavages was set to 2. The minimum peptide length was set to 7 and the maximum to 30 AA. Cysteine carbamidomethylation was used as fixed modification and methionine oxidation as variable modification. DIA-NN optimized the mass accuracy automatically using the first run in the experiment. Data processing based on protein group quantities and functional analyses were done with Perseus software (version 1.6.15.0), Jupyter Notebook (Python version 3.9), and Cytoscape. Protein intensities were log2-transformed before filtering the data. To avoid exclusion of relevant proteins that were not detected because of low expression in one of the conditions (either Wt or Lal−/−), data were filtered for at least 4 valid values out of the 5–6 samples in at least one group, and missing values were replaced by random values drawn from the Gaussian distribution (width of 0.3, downshift of 1.8). Principal component analyses were performed on standardized data (z-scored) and visualized with Jupyter Notebook using the Pandas, Numpy, Matplotlib, Sklearn, Seaborn, and Bioinfokit packages. Two-sample t-tests followed by correction for multiple testing using the permutation-based FDR method were used to identify altered protein groups (S0 = 0.1, FDR <0.01).
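The valid-value filtering and Gaussian imputation described above can be sketched in NumPy. The matrix below is a toy example (three proteins, two groups of three samples, with the valid-value threshold lowered accordingly), not the actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_and_impute(log2, group_a, group_b, min_valid, width=0.3, downshift=1.8):
    """Perseus-style missing-value handling on a log2 intensity matrix:
    keep proteins with >= min_valid finite values in at least one group,
    then replace NaNs per sample with draws from a Gaussian narrowed by
    `width` and shifted down by `downshift` (in units of that sample's SD).
    """
    valid_a = np.isfinite(log2[:, group_a]).sum(axis=1)
    valid_b = np.isfinite(log2[:, group_b]).sum(axis=1)
    kept = log2[(valid_a >= min_valid) | (valid_b >= min_valid)].copy()
    for j in range(kept.shape[1]):
        col = kept[:, j]
        mu, sd = np.nanmean(col), np.nanstd(col)
        missing = ~np.isfinite(col)
        col[missing] = rng.normal(mu - downshift * sd, width * sd, missing.sum())
    return kept

# Toy matrix: rows = proteins, columns = 3 Wt + 3 Lal-/- samples.
m = np.log2(np.array([
    [900.0, 1100.0, 1000.0, 400.0, 500.0, 450.0],    # fully quantified
    [np.nan, 800.0, np.nan, np.nan, 700.0, np.nan],  # too sparse: dropped
    [1500.0, np.nan, 1400.0, 2000.0, 2100.0, 1900.0],
]))
out = filter_and_impute(m, group_a=[0, 1, 2], group_b=[3, 4, 5], min_valid=3)
print(out.shape, bool(np.isfinite(out).all()))  # (2, 6) True
```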
Enrichment analysis for Gene Ontology Biological Process (GOBP), GO Cellular Component (GOCC), and Reactome pathways was performed using the PANTHER enrichment test for log2-fold changes of proteins [ 39 , 40 ]. Significantly changed pathways are shown in Supplementary Table S3 .
Statistical analyses
Statistical analyses were performed and graphs generated using GraphPad Prism 9 software. Significance was calculated by unpaired Student’s t-test or one-way analysis of variance (ANOVA) followed by Bonferroni post-test. Data are shown as mean ± SD or + SD. The following levels of statistical significance were used: *p < 0.05, **p ≤ 0.01, ***p ≤ 0.001. Bioinformatic and statistical analyses of the -omics data are described in section 2.18.

Results
Reduced cross-sectional area and SM mass in Lal−/− mice
To study the consequences of LAL loss in SM, we isolated gastrocnemius (GA), quadriceps (QU), tibialis anterior (TA), and soleus (SO) and compared their masses between male Wt and Lal−/− mice in the fed (15–16 weeks of age, designated young) and fasted (40–55 weeks of age, designated mature) states. The absolute ( Figure 1A ) and relative weights ( Figs. S1B and C ) of QU, TA, and GA were drastically reduced in Lal−/− mice at young and mature ages, whereas the body weights of Lal−/− mice were approx. 30 % lower compared to controls ( Fig. S1A ). The weight of SO remained unchanged and comparable with that of Wt mice ( Figure 1A , S1B,C ). Of note, the muscles from Lal−/− mice were paler than the muscles from Wt mice ( Fig. S1D ). To quantitatively and qualitatively assess muscle fibers and estimate the muscular phenotype, we determined the cross-sectional area (CSA) and minimum Feret diameter of myofibers by quantifying the laminin-stained area with ImageJ. Consistent with the reduced muscle mass in Lal−/− mice, the mean fiber CSA ( Figure 1B,C ) and Feret diameter ( Figure 1B,D ) of QU, TA, and GA were significantly lower in Lal−/− mice compared to their Wt littermates. Examination of the SM sections and use of the “Muscle morphometry” plugin of ImageJ failed to reveal signs of myopathy, such as centronucleated myofibers, in either genotype. In summary, these findings demonstrate reduced muscle size in Lal−/− mice.
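For orientation, both morphometric readouts have simple geometric definitions: on a polygonal fiber outline, CSA is the shoelace area and the minimum Feret diameter is the smallest caliper width over all directions. A brute-force sketch (not the ImageJ implementation):

```python
import numpy as np

def cross_sectional_area(xy):
    """Polygon area via the shoelace formula; an (N, 2) outline in µm
    gives the area in µm²."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def min_feret(xy, n_angles=360):
    """Minimum Feret (caliper) diameter: the smallest projection width
    of the outline over a grid of directions."""
    widths = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = xy @ np.array([np.cos(theta), np.sin(theta)])
        widths.append(proj.max() - proj.min())
    return min(widths)

# A 10 µm x 10 µm square outline: CSA = 100 µm², minimum Feret = 10 µm.
square = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
print(cross_sectional_area(square), round(min_feret(square), 6))  # 100.0 10.0
```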
Unaffected protein turnover in SM of Lal−/− mice
During cold exposure, chow diet-fed Lal−/− mice utilize their muscle AA as an additional energy source, as indicated by the upregulation of the muscle proteolysis markers Murf1 and Atrogin1 in GA [ 41 ]. In contrast to the cold-exposed animals, Murf1 and Atrogin1 mRNA levels were downregulated in GA ( Figure 2A ), QU ( Fig. S2A ), and TA ( Fig. S2B ) but not in SO ( Fig. S2C ) of Lal−/− mice in the fed state, whereas no difference was evident in the fasted state in GA ( Figure 2B ), QU ( Fig. S2D ), TA ( Fig. S2E ), and SO ( Fig. S2F ).
One of the key metabolic mechanisms controlling muscle wasting is the ubiquitin proteasome system [ 42 ]. In GA ( Figure 2C ), QU ( Fig. S2G ), and TA ( Fig. S2H ) from Lal−/− mice, chymotrypsin-like activity was reduced, whereas trypsin-like and caspase-like proteasome activities were comparable between the genotypes, arguing against activation of the ubiquitin proteasome system to degrade proteins for energy supply in Lal−/− SM. Analysis of plasma AA concentrations revealed decreased abundance of the glucogenic AA glutamine (Gln) in the fed condition ( Figure 2D ). Increased concentrations of branched-chain AA such as valine (Val) ( Figure 2D ), glucogenic/ketogenic isoleucine (Ile) ( Fig. S2I ), and the ketogenic AA leucine (Leu) ( Fig. S2J ) in fasted Lal−/− mice indicate minor alterations in AA metabolism, which, however, may not be associated with muscle fiber degradation in Lal−/− mice.
The IGF-1/Akt/mTOR pathway is a crucial intracellular regulator of muscle mass [ 43 – 45 ]. However, we failed to detect any significant changes in the ratio of pAkt/Akt and p4eBP1/4eBP1 protein expression in Lal−/− QU ( Figure 2E,F ) and GA ( Figs. S2K and L ) as well as Igf1 ( Fig. S2M ) gene expression in QU and GA from fed Lal−/− and Wt mice.
Loss of LAL impacts SM energy metabolism
The intramuscular stores of ATP are generally relatively small [ 46 ]. The markedly decreased ATP concentrations in QU of Lal−/− mice ( Figure 3A ) would have to be compensated by activation of metabolic pathways such as phosphocreatine and muscle glycogen degradation followed by anaerobic glycolysis and aerobic carbohydrate and lipid catabolism [ 46 , 47 ]. Creatine and glycogen concentrations were slightly increased in the GA of Lal−/− mice ( Figs. S3A and B ). Consistent with our previous observation of reduced circulating glucose levels and increased glucose uptake in SM [ 16 ], we observed increased [ 3 H]2-deoxy-D-glucose uptake in all SM examined ( Figure 3B ). Surprisingly, glucose uptake was increased not only in SM rich in glycolytic fibers but also in the highly oxidative SO.
Lal−/− mice accumulate lipid-laden lysosomes in ectopic tissues such as the liver, spleen, and small intestine [ 12 , 15 ]. In contrast to these tissues, we found increased CE concentrations in ad libitum-fed Lal−/− mice only in GA ( Figure 3C ), whereas in SO we found only a tendency toward increased CE but elevated concentrations of FC ( Fig. S3C ). Cholesterol levels in QU and TA ( Figs. S3D and E ) as well as TG levels in all SM ( Fig. 3C , S3C–E ) were comparable between the genotypes. In the fasted state, mature Lal−/− mice showed a drastic decrease in TG concentrations in QU ( Figure 3D ) and trends toward decreased concentrations in TA ( Figure S3F ), GA ( Fig. S3G ), and SO ( Fig. S3H ). We additionally observed higher concentrations of CE in QU ( Figure 3D ), TA ( Fig. S3F ), and GA ( Fig. S3G ) of Lal−/− mice compared to the corresponding littermates, in line with the increased TC concentrations, which were also evident in SO ( Fig. S3H ).
During fasting, lipid oxidation is the predominant source of energy in the resting SM [ 48 ]. Since Lal−/− mice suffer from loss of adipose tissue [ 12 ] and have reduced circulating TG levels in the fasted state [ 16 ], we next analyzed whether FAO was affected in Lal−/− SM.
Despite comparable mRNA expression of genes involved in FA transport and oxidation, with the exception of slightly increased Cpt1b expression ( Fig. S3K ), total acyl-carnitine ( Figure 3E ) and individual acyl-carnitine concentrations were markedly reduced in QU ( Figure 3F ), SO ( Figure S3I ), and TA ( Fig. S3J ). Respiratory stimulation of the F-pathway at the non-phosphorylating resting state (PctM L(n) ) and the active OXPHOS state (PctM P ) was reduced in permeabilized muscle fibers from Lal−/− mice ( Figure 3G ). In addition, respiratory stimulation by simultaneous action of the F- and N-pathways with convergent electron flow (PctPM P ), respiratory stimulation by simultaneous action of the F-, N-, and succinate (S-) pathways (PctPMS P ), and the electron transfer capacity (PctPMS E ) were decreased ( Figure 3G ), confirming decreased FA utilization in the SM of fasted Lal−/− mice.
Despite the insufficient supply of FA and acyl-CoA in the liver, hepatic mitochondrial function and energy production of Lal−/− mice were unaltered [ 16 ]. To assess SM mitochondrial function, we determined the oxygen consumption rate in permeabilized SM fibers by high-resolution respirometry. Loss of LAL resulted in a slight decrease in basal respiration (GM L ) driven by complex I substrates (glutamate and malate) and in the active ADP-stimulated state (GM P ). Maximal respiration driven by complexes I and II was significantly reduced upon the addition of succinate (GMS P ) and in the presence of the uncoupler CCCP (GMS C ) in myofibers from Lal−/− mice, indicating general consequences for mitochondrial coupling ( Figure 3H ). This result suggests impaired mitochondrial function in Lal−/− SM independent of the number or structure of mitochondria, as mitochondrial morphology ( Figure 3I ) and mtDNA content ( Fig. S3L ) remained unaltered between the genotypes.
Increased expression of oxidative myofiber proteins in Lal−/− mice
The distribution and abundance of specific myosin heavy chain (MyHC) isoforms dictate the predominance of oxidative (slow-twitch) or glycolytic (fast-twitch) fibers, thereby affecting the functional properties and metabolic features of SM [ 49 ]. Despite increased 2-deoxy-D-glucose uptake ( Figure 3B ), mRNA expression of Myh7 , specific for oxidative fibers, was increased in GA ( Figure 4A ), TA ( Fig. S4A ), and SO ( Fig. S4B ) of Lal−/− mice. The markedly increased protein expression of MYH7 in QU ( Figure 4B ), GA ( Figure 4C , S4C ), and TA ( Figure 4C,D ) confirmed the fiber type switch in Lal−/− SM.
Reduced exercise capacity in Lal−/− mice
To investigate whether the observed changes in fiber types translate into reduced physical performance, we performed an exercise tolerance test using the treadmill peak effort test coupled with indirect calorimetry. We found that Lal−/− mice ran significantly shorter distances ( Figure 5A ) and exhibited a lower total workload ( Figure 5B ) than Wt mice. Consistent with this observation, Lal−/− mice had lower maximal aerobic capacity (VO 2 max ) compared to Wt mice ( Figure 5C ). These findings possibly indicate that remodeling of the muscular phenotype translates into impaired exercise capacity in vivo .
Altered protein expression patterns in the SM of Lal−/− mice
Expression of Lipa in SM is low compared with organs with high lipoprotein turnover such as the liver, but is approximately 2-fold higher in the highly oxidative SO than in QU, TA, and GA ( Fig. S5 ). These differences prompted us to compare protein abundance between the more oxidative and the more glycolytic parts of SM from Wt and Lal−/− mice. Thus, we divided the GA into “red” (enriched with oxidative fibers) and “white” (enriched with glycolytic fibers) parts and performed proteomic analyses. After filtering for a minimum of four valid values out of six in at least one group and imputing missing values, we quantified 3917 proteins in red and 3852 proteins in white muscle fibers. Statistical analysis of the muscle proteome of Lal−/− mice (FDR <0.01, S0 = 0.1) revealed 567 (300 down- and 267 upregulated) and 430 (215 down- and 215 upregulated) significantly changed proteins in red and white muscle fibers, respectively ( Table S3 ). Principal component analysis showed a clear separation between Wt and Lal−/− samples in the oxidative ( Fig. S6A ) and glycolytic ( Fig. S6B ) parts of GA, with significant up- or downregulation of various proteins in both the red and white parts of the Lal−/− GA, as shown in the volcano plots ( Figure 6A,B ). The highly upregulated proteins found in both red and white muscle fibers included myosin-7 (MYH7), ceruloplasmin (CERU), cathepsin S (CATS), haptoglobin (HPT), alpha-1-acid glycoprotein 1 and 2 (A1AG1, A1AG2), glutamine synthetase (GLNA), and several types of troponins specific to slow SM (troponin I (TNNI1), troponin C (TNNC1)) ( Figure 6A,B ; Table S3 ). Some of the highly downregulated proteins observed in both parts of GA included insulin-like growth factor-binding protein 5 (IBP5), mitochondrial creatine kinase S-type (KCRS), collagen alpha-1(III) chain (CO3A1), and NADH dehydrogenase [ubiquinone] 1 alpha subcomplex assembly factor 2 (NDUF2) ( Figure 6A,B ; Table S3 ).
A substantial number of the upregulated proteins were related to muscle structure and function, whereas the downregulated proteins were associated with mitochondria and metabolism.
We next performed PANTHER enrichment analysis to identify up- and downregulated GOBP and GOCC terms as well as Reactome pathways ( Table S3 ). Consistent with reduced mitochondrial respiration ( Figure 3G ), highly significant GOBP terms downregulated in the oxidative part of Lal−/− GA were associated with mitochondrial function and structure, whereas upregulated terms involved RNA processing ( Figure 6C ). In contrast, the upregulated GOBP terms in the glycolytic part of Lal−/− GA mainly involved various metabolic and catabolic processes ( Figure 6D ). We also found that protein folding was a significantly downregulated GOBP term in the white part of Lal−/− GA ( Figure 6D ).
We next examined how the loss of LAL affects the expression of proteins annotated to various cellular components. GOCC terms downregulated in the oxidative part of Lal−/− GA included respirasome, mitochondrial membrane, and mitochondrial respiratory chain complex I ( Fig. S6C ), whereas no GOCC terms were downregulated in the glycolytic part ( Fig. S6D ). Nucleosome and myosin filament were among the highly significant upregulated GOCC terms in the glycolytic Lal−/− GA part ( Fig. S6D ). Similarly, in the oxidative part of Lal−/− GA, nuclear protein-containing complex, myosin complex, and spliceosomal complex were upregulated ( Fig. S6C ).
To investigate metabolic clustering of regulated proteins in more detail, we performed Reactome pathway enrichment analysis. Selected highly upregulated Reactome pathways in the oxidative part of Lal−/− GA were strongly associated with RNA splicing, gene expression, and muscle contraction, whereas the downregulated terms included translation and the citric acid cycle ( Figure 6E ). In the glycolytic part of Lal−/− GA, upregulated Reactome pathways included transport of small molecules as well as lipid and energy metabolism, whereas the downregulated terms were related to respiratory electron transport, the endosomal sorting complex, and mitochondrial iron-sulfur cluster biogenesis ( Figure 6F ).
Network analysis using Cytoscape based on protein–protein interactions among the significantly differentially expressed proteins revealed comparable interacting clusters in both parts of GA. In particular, we observed clusters formed by several downregulated proteins that play a role in oxidative phosphorylation and mitochondria ( Figure 6G and S6E ), consistent with reduced ATP concentrations ( Figure 3A ) and the mitochondrial dysfunction in Lal−/− SM determined by high-resolution respirometry ( Figure 3G ). Another group of downregulated proteins clustered around protein processing in the ER and protein folding, whereas upregulated proteins formed clusters related to muscle contraction and muscle development ( Figure 6G and S6E ). This finding further confirmed the dysfunction of SM mitochondria and the fiber switch in Lal−/− mice.

Discussion
The impact of LAL-D on the pathophysiology of SM is still unclear, although decreased muscle size and muscle weakness were reported as characteristic features of LAL-D patients [ 23 – 26 , 28 ]. Compromised lipid homeostasis in Lal−/− mice, arising from impaired hepatic and intestinal lipid metabolism as well as lipodystrophy, leads to increased glucose consumption despite unaltered insulin levels, which ultimately results in reduced glucose concentrations in plasma and liver [ 16 ]. Since SM require a considerable amount of energy to function efficiently, alterations in lipid and/or glucose metabolism may influence the metabolic state of the muscle and thus the proper functionality of the organ. We therefore aimed to characterize muscle structure, function, and metabolism in an animal model of LAL-D. Our findings demonstrate that the phenotype and metabolism of Lal−/− SM are substantially impaired most likely due to an energy deficit and impaired energy metabolism associated with mitochondrial dysfunction.
Muscles from Lal−/− mice are smaller and exhibit diminished myofiber CSA and Feret diameter. However, preliminary data from our laboratory failed to reveal any differences in SM mass (QU) between 2-week-old Lal−/− and Wt mice or any alterations in muscle differentiation and myogenesis marker genes (data not presented), despite the crucial role of LAL during early development [ 31 ]. Thus, we assumed that the reduced SM mass and size worsened with age in Lal−/− mice, reflecting either growth retardation [ 50 ] or progressive loss of SM mass due to muscle proteolysis [ 51 ]. Plasma AA could serve as a substrate for the translation of muscle proteins [ 52 ]. The concentrations of glucogenic AA, which were expected to be increased during starvation due to muscle proteolysis, especially in mature mice, were unchanged or even reduced (Gln and Arg) in Lal−/− compared to Wt mice. Elevated concentrations of the branched-chain AA Leu, Ile, and Val in fasted Lal−/− mice could meet the elevated body demand for Ala and Gln due to increased muscle proteolysis or reduced muscle protein synthesis [ 53 , 54 ]. However, unaltered proteasomal activity and expression of muscle proteolysis markers in the SM of fasted Lal−/− mice argued against muscle wasting being responsible for the smaller size and mass of Lal−/− SM.
PI3K/Akt, activated by either insulin or IGF-1, is critical for controlling protein production in SM [ 43 , 44 , 55 ]. Other key regulatory proteins involved in translation such as 4E-BP-1 are downstream targets of the mammalian target of rapamycin (mTOR) kinase, which is phosphorylated and triggered by activation of Akt [ 45 , 56 ]. Despite unaltered expression of these markers, translation itself was one of the significantly downregulated terms revealed in Reactome pathways in red muscle fibers, suggesting altered protein synthesis by various signaling pathways, e.g. the nuclear factor kappa B (NF-κB)-dependent pathway [ 45 ]. Of note, Lal−/− mice suffer from systemic inflammation, as demonstrated by elevated levels of pro-inflammatory cytokines and macrophage infiltration in the liver, spleen, lung, and small intestine [ 12 , 15 , 57 , 58 ]. However, despite unaltered Igf1 and Igf1r (data not shown) expression, IGFBP5, which generally enhances the effect of IGF-1, was strongly downregulated in both parts of GA. The Igfbp5 gene encoding IGFBP5 is expressed at low levels during fasting, cachexia, diabetes, and other conditions that cause skeletal muscle atrophy [ 59 ]. Recent results also suggested that IGFBP5 modulates the action of autocrine IGF-2 in promoting myogenic differentiation [ 60 ]; however, this contradicts the finding that overexpression of IGFBP5 under a constitutive promoter impairs myogenic differentiation [ 61 , 62 ]. Nevertheless, a decrease in IGFBP5 expression was described upon muscle denervation or unloading [ 63 ], which is in line with the decreased locomotor activity [ 41 ] and physical capacity of Lal−/− mice during the treadmill peak performance test. It is important to mention that cardiac abnormalities have not been described in Lal−/− mice. Whether the pathophysiological phenotype of Lal−/− lungs [ 57 ] contributes to the physical performance of Lal−/− mice is currently unknown.
Lal−/− mice display reduced muscle ATP concentrations, resulting in compensatory activation of various metabolic pathways, including increased glucose uptake and degradation of phosphocreatine, glycogen, and lactate [ 16 ]. While TG levels remain unchanged in Lal−/− mice in the fed state, fasting leads to reduced TG concentrations in different muscles. The significant decrease in total acyl-carnitine levels and the confirmed reduction in FA utilization further underscore the complex metabolic dysregulations in Lal−/− mice that affect both energy production and substrate utilization.

The distribution and abundance of specific MyHC isoforms dictate the predominance of “red” oxidative (slow-twitch) or “white” glycolytic (fast-twitch) fibers, thereby influencing the functional properties and metabolic features of the SM. Myofibers of vertebrate SM differ in their contractile properties, mitochondrial density, and metabolic characteristics, with different SM exhibiting a combination of various fibers. Slow-twitch fibers are characterized by the presence of type I MyHC expression and high mitochondrial density, which is predominantly associated with an oxidative metabolism. Fast-twitch fibers express type II MyHCs subdivided into IIa, IIx, and IIb [ 49 ]. Thus, QU, TA, and GA are mainly composed of MyHCIIb (fast glycolytic fibers) but in varying proportions, whereas SO has a combination of MyHCI and MyHCIIa [ 64 ]. Changes in muscle size may also be associated with a switch in muscle fiber types, since slow (type I) fibers have a smaller CSA than fast (type II) fibers, which was termed the “muscle fiber type – fiber size paradox” [ 65 ]. When we analyzed gene expression and proteome signature changes, MYH7 and other specific slow oxidative muscle markers such as TNNC1 and TNNI1 were upregulated in GA of Lal−/− mice, suggesting that loss of LAL leads to a switch from fast type II to slow type I myofibers with reduced CSA and Feret diameter.
However, the mechanism underlying the increased expression of proteins specific to type I fibers remains unclear, especially since glucose consumption was elevated in the SM of Lal−/− mice. SM with fast-twitch glycolytic fibers are more sensitive to energy substrate deprivation than slow-twitch oxidative fibers under atrophic conditions [ 66 ]. Thus, Lal−/− SM may adapt to the dysfunctional whole-body energy homeostasis, as slow fibers work sufficiently while consuming less ATP at lower power output than fast fibers [ 67 ].
An increased number of oxidative fibers should be associated with more mitochondria, contributing to enhanced cellular respiration [ 68 ]. However, we observed decreased oxidative capacity in permeabilized fibers isolated from Lal−/− GA, indicating mitochondrial dysfunction in Lal−/− mice, although the number and morphology of mitochondria remained unchanged. Proteomic analyses confirmed that pathways associated with oxidative phosphorylation, the electron transport chain, and ATP synthesis were the most dysregulated biological processes in both the red and the white parts of Lal−/− GA. Downregulated proteins included NDUF, SDH, UQCR, and ATP5 family members representing complexes I, II, III, and V, respectively, which were previously described to be strongly reduced in sarcopenic muscles [ 69 ]. ATP plays an essential role in preventing protein aggregation and is a major energy source required for most energy-dependent cellular functions such as protein synthesis (about 30 % of available ATP [ 56 ]), folding, translocation to the ER, and protein degradation [ 70 ]. Thus, insufficient ATP supply for protein processing may also explain the reduced muscle size in Lal−/− mice. It is worth noting that the downregulated GOBP terms and the protein interaction network generated by Cytoscape confirmed impaired protein folding in both the red and white parts of GA.
Interestingly, among the most downregulated proteins, we found major urinary protein 11 (MUP11) in both the oxidative and glycolytic parts of Lal−/− GA, and additionally MUP3 in the red part. SM was reported to be one of the major metabolic target tissues of MUP1, a close homolog of MUP3 [ 71 ]. Low concentrations of circulating MUP1 contribute to the metabolic dysregulation in obese and diabetic mice, which was markedly ameliorated by MUP1 replenishment, which increased the expression of genes involved in mitochondrial biogenesis and enhanced mitochondrial oxidative capacity, predominantly in SM [ 71 ]. Our findings may indicate a role of other MUPs as potential regulators of the diminished mitochondrial function in Lal−/− mice.
Loss of LAL is associated with ectopic accumulation of lipids, especially CE [ 12 , 16 , 58 , 72 , 73 ], also in SM [ 74 ]. Cholesterol accumulation may contribute to mitochondrial dysfunction, including reduced respiration and decreased ATP production [ 75 , 76 ], in Lal−/− SM. The previously described rapid development of fatigue and the reduction in ATP turnover of the SM [ 77 ] are consistent with the mitochondrial dysfunction and reduced ATP concentration in Lal−/− SM. The decreased mitochondrial respiration may also result from an overall reduction in the import of components of the mitochondrial oxidative phosphorylation system and/or substrate transporters throughout the body as a consequence of the inaccessibility of lipids, the absence of white adipose tissue, and the rapid glucose consumption in various organs of Lal−/− mice.

Conclusion
Taken together, our data provide conclusive evidence that whole-body loss of LAL affects the phenotype and, most probably, the functions of SM, due to insufficient ATP production associated with dysfunctional mitochondria and impaired energy metabolism. The described alterations result in an SM fiber switch, and Lal−/− mice may serve as a model to study the complex molecular mechanisms of muscle remodeling under conditions of impaired lipid metabolism. However, it remains elusive whether the reduction in muscle mass and the increased muscle fatigue are attributable to the loss of LAL activity in SM itself or to the systemic loss of the enzyme, which mainly affects the liver, small intestine, and adipose tissue and is associated with severe macrophage infiltration and systemic inflammation.

Objective
Lysosomal acid lipase (LAL) is the only enzyme known to hydrolyze cholesteryl esters (CE) and triacylglycerols in lysosomes at an acidic pH. Despite the importance of lysosomal hydrolysis in skeletal muscle (SM), research in this area is limited. We hypothesized that loss of LAL may affect SM development, function, and metabolism as a result of disruptions in lipid and/or carbohydrate metabolism.
Results
Mice with systemic LAL deficiency (Lal−/−) had markedly lower SM mass, cross-sectional area, and Feret diameter despite unchanged proteolysis or protein synthesis markers in all SM examined. In addition, Lal−/− SM showed increased total cholesterol and CE concentrations, especially during fasting and maturation. Despite increased glucose uptake, expression of the slow oxidative fiber marker MYH7 was markedly increased in Lal−/− SM, indicating a fiber switch from glycolytic, fast-twitch fibers to oxidative, slow-twitch fibers. Proteomic analysis of the oxidative and glycolytic parts of the SM confirmed the transition between fast- and slow-twitch fibers, consistent with the decreased Lal−/− muscle size due to the “fiber paradox”. Decreased oxidative capacity and ATP concentration were associated with reduced mitochondrial function of Lal−/− SM, particularly affecting oxidative phosphorylation, despite an unchanged structure and number of mitochondria. Impairment in muscle function was reflected by premature exhaustion in the treadmill peak effort test in vivo .
Conclusion
We conclude that whole-body loss of LAL is associated with a profound remodeling of the muscular phenotype, manifested by fiber type switch and a decline in muscle mass, most likely due to dysfunctional mitochondria and impaired energy metabolism, at least in mice.

Supplementary Material

Acknowledgments
This work was supported by the Austrian Science Fund FWF (SFB 10.55776/F73, DK-MCD 10.55776/W1226, 10.55776/P32400, 10.55776/P30882, 10.55776/P28854, 10.55776/I3792, 10.55776/DOC-130, 10.55776/FG12), Austrian Research Promotion Agency grants 864690 and 870454, Integrative Metabolism Research Center Graz, Austrian Infrastructure Program 2016/2017, the PhD program “Molecular Medicine” and the flagship project “VascHealth” of the Medical University of Graz, BioTechMed-Graz (flagship project DYNIMO), the Province of Styria, and the City of Graz. For open access purposes, the authors have applied a CC BY public copyright license to any author accepted manuscript version arising from this submission. The authors acknowledge the excellent technical assistance of B. Schwarz, A. Ibovnik, S. Rainer, I. Pölzl, and D. Pernitsch (Medical University of Graz, Austria) and thank A. Absenger, M. Singer, and I. Hindler (Medical University of Graz, Austria) for mice maintenance.
Data Availability
The mass spectrometry proteomics datasets have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository [78] with the dataset identifier PXD045665. All other data will be made available upon request.
Abbreviations
AA: amino acid
CCCP: carbonyl cyanide m-chlorophenylhydrazone
CE: cholesteryl ester
CSA: cross-sectional area
ET: electron transfer pathway
FA: fatty acid
FAO: fatty acid oxidation
FC: free cholesterol
FDR: false discovery rate
GAS: gastrocnemius
GOBP: gene ontology biological process
GOCC: gene ontology cellular component
LAL: lysosomal acid lipase
Lal−/−: Lal knockout
LAL-D: LAL deficiency
MYH: myosin heavy chain
QUAD: quadriceps
SM: skeletal muscle
SOL: soleus
TA: tibialis anterior
TC: total cholesterol
TFEB: transcription factor EB
TG: triacylglycerol
WT: wild-type

Mol Metab. 2023 Dec 30;:101869 (CC BY)
PMC8550499 | PMID 34723203

Introduction
As a consequence of large-scale phenomena such as globalization, social integration, migrations, and progress in information and communication technology, the world has become a much more complex place than it used to be. Today, social systems, which I shall define as collections of interacting components, are much more interconnected, and actions within and outside these systems can (and often do) give rise to unexpected (or deliberately overlooked) reactions in terms of “what is reacting to what” and “with what intensity”. As a consequence, it is essential to understand such manifestations of complexity in depth when analysing a wide range of activities, such as policy making. Indeed, policy interventions can produce consequences in apparently unrelated fields because of connections that have been overlooked or were simply unknown. Moreover, the issue that a specific intervention aims to address might have roots elsewhere, and therefore be resistant to policy intervention until these connections are clearly identified. Therefore, policy interventions in a complex environment are often effective only if they are designed with a degree of complexity that matches that of the issue they are addressing (Bar-Yam, 2015), lest societies collapse under a level of complexity that is no longer sustainable (Tainter, 1988). Clearly, this is true also for language policy issues. The complexity of human language and language-related social phenomena can hardly be denied. In this paper I provide a quick review of complexity theory, an approach specifically developed for the study of complex phenomena, and I put it in relation to language policy. My objective is to show that language policies have an intrinsically complex nature and should be studied from a complexity theory perspective. Besides, I want to discuss the potential contribution that computational methods can provide to the study of language-related matters.
Complexity theory is better described as a way of approaching the analysis of certain phenomena, a paradigm of study, rather than a theory stricto sensu. One should think of it as a way of re-thinking things and phenomena, looking at them through a convex lens that enlarges the field of vision. As we shall see in the following pages, complexity theory integrates a set of concepts and ideas derived from different disciplines and fields of research that give up a mechanistic view of the world in favour of a holistic approach, whereby the object of study is often characterized by a degree of uncertainty. At the same time, this approach de-emphasizes linearity and predictability (Grobman, 2005). It stands in direct opposition to the philosophical position of “reductionism”, which holds that all processes and phenomena can be reduced to a collection of simpler basic parts. However, this does not amount to saying that complexity theory rules out the possibility of deducing larger macro-dynamics from individual micro-cases; quite the opposite. It simply states that a blanket application of strictly inductive logic risks being fallacious.
An in-depth discussion of complexity theory is far beyond the scope of this article. I will mention here only a few recurring traits of complex phenomena. Over the last decades, complexity has been defined in a number of different ways, each definition stressing one aspect or another. A complex system has a large number of heterogeneous, independent and interacting components, able to evolve along multiple pathways. Its development is very sensitive to initial conditions or to small perturbations, and it is analytically described through non-linear differential equations (Whitesides and Ismagilov, 1999). On top of being inherently complicated, complex populations are rarely deterministic, and predisposed to unexpected outcomes (Foote, 2007). The major problems for those who get to deal with complex systems are unintended consequences and the difficulty of making sense of a situation (Sargut and McGrath, 2011). The key notions in these definitions (heterogeneity, interaction, multiple pathways, sensitivity to initial conditions, unpredictability, unintended consequences) recur throughout theoretical and applied texts dealing with complexity theory. At times, each of these aspects has been singled out as the key characteristic of a complex system.
In the coming pages, I first provide a definition of language policy and explain why language matters are usually addressed through policy. Then, I go on to discuss complexity. However, instead of providing an in-depth description of the numerous aspects of complex phenomena, I will review some of them by referring directly to language-related issues. Finally, I discuss the role that computational methods can play in language policy making.

Conclusions
Throughout this paper I tried to show that computational methods can play an important role in the field of language policy. This role has been largely overlooked until recently. This does not only concern simulation-based methods, but computational methods in general, such as natural language processing and other machine learning-based methods. This is largely justified by the fact that, as I have discussed at length, language matters display many traits commonly associated with complex systems. As a matter of fact, numerous scholars have observed over the past few years that language-related issues are extremely multi-faceted and that a single disciplinary perspective can only shed light on one side at a time. Only genuinely interdisciplinary approaches can hope to capture more complexity. Nevertheless, while such approaches are strongly supported by numerous scholars in the scientific community, they still represent the exception rather than the rule. Drawing from many disciplinary backgrounds, a complexity theory approach represents a step in the direction of spelling out with greater accuracy all the causal links involved in language issues.
I would like to stress once more the great potential of simulation models, as well as computational models in general, for the purposes of policy making in all its phases, from development to evaluation. The possibility of experimenting in a virtual world that is fully controlled by the policy maker represents a major advantage over randomized controlled trials involving actual people. Once the policy model is developed, various scenarios can be simulated under different conditions at virtually no cost. Conversely, setting up multiple controlled trials can be very costly, not to mention the fact that it might have important ethical implications. Moreover, controlling for the impact of external variables can be really challenging in real life, while it is an almost trivial task in a simulated environment. Furthermore, even when there is little data for calibration and validation, a simulation model can be very useful in providing an overall idea of the type of impact that one should expect from a given policy. To conclude, I shall say that the culture of computational modelling will need to spread in the policy making environment for the full potential of simulation models to be exploited.

Abstract

In this paper I argue in favour of the adoption of an interdisciplinary approach based on computational methods for the development of language policies. As a consequence of large-scale phenomena such as globalization, economic and political integration and the progress in information and communication technologies, social systems have become increasingly interconnected. Language-related systems are no exception. Besides, language matters are never just language matters. Their causes and consequences are to be found in many seemingly unrelated fields. Therefore, we can no longer overlook the numerous variables involved in the unfolding of linguistic and sociolinguistic phenomena if we wish to develop effective language policy measures.
A genuinely interdisciplinary approach is key to addressing language matters (as well as many other public policy matters). In this regard, the tools of complexity theory, such as computational methods based on computer simulations, have proved useful in other fields of public policy.
Funding

Open Access funding provided by Université de Genève.

Language policy and complexity
Public policy can be defined as “an intentional course of action followed by a government institution or official for resolving an issue of public concern. Such a course of action must be manifested in laws, public statements, official regulations, or widely accepted and publicly visible patterns of behaviour” (Cochran et al., 2009, pp. 1–2, emphasis in original). The expression “of public concern” reveals the collective nature of the issue at stake. Language is the natural means of communication, which is an inevitable part of society. As a consequence, language issues and all their social, political, and economic implications deserve the attention of public policy practitioners and scholars. The role played by policy analysis in the field of language policy has been evident to scholars (in particular to sociolinguists) since the 1970s (Jernudd, 1971; Rubin, 1971; Thorburn, 1971). Nevertheless, it received greater attention only starting from the 1990s, when a number of scholars from political science and economics started to apply policy analysis models to language policy (Grin and Vaillancourt, 1999; Grin and Gazzola, 2010; Gazzola, 2014).
But what is language policy exactly? This question is rather simple and might pop up spontaneously in the mind of those who are not technically involved in it. However, language policy is something that affects everyone, actively and passively. It is fundamental to understand that almost anything involving communication and language use is the result (at least in part) of different combinations of language policy measures, from the choice of providing certain services in a given language to the drafting of school curricula. Sometimes, even the linguistic identity that an individual assigns to herself and her community might be influenced through policy measures. Besides, language policies have repercussions on society which might affect people's lives so profoundly that it is often very hard to isolate them. Suffice it to say that numerous researchers in the social sciences (and not only) often find themselves facing language issues in their daily work and have a hard time managing them. The non-negligible impact of language policies on people calls for sustained research efforts focusing entirely on them. Furthermore, it is impossible to deny the importance of language policy and its very existence, as there is no reality involving communication between humans with “no language policy”; as a matter of fact, the simple act of declining to take decisions concerning language issues is itself a form of language policy (which is, anyway, being communicated in a certain language).
However, providing an answer to the questions concerning the very existence of this field of research (such as: Why is it necessary? What are its objectives? What are the material results of this research?) has proved, over the last few decades, somewhat difficult. The cause of this is the lack of a generally accepted comprehensive theory of language policy. Yet this is not the consequence of superficial research; rather the opposite. Language policies are so entrenched in everyday life that they are acknowledged and practiced in all societal domains, a point made forcefully by Ricento.
Are language issues complex?
I will now reconsider language policy and some notable language-related issues from a complexity perspective in order to show that language issues display the typical traits of complexity and, therefore, that they “qualify” to be considered and treated as complex issues. Acknowledging that language issues are complex implies that language policies have to be drafted in a way that takes the principles of complexity theory into account. This does not mean “changing language policies”, in that a policy is not something that exists autonomously, in its own right. A policy is an answer to a specific problem, without which it has neither reason nor legitimacy to exist. Therefore, a policy answer should reflect the characteristics of the object being treated. If we recognize that language issues are complex, the policy maker needs to adopt a complex approach to draft an answer. In what follows, I provide examples illustrating different language issues. My objective is to show that language issues are complex in nature and should therefore be addressed by means of complexity theory.
Non-linearity and feedback loops
By non-linearity, specialists of complexity often mean that a system displays a fundamental disproportionality between cause and effect (Homer-Dixon, 2010). In a simple (linear) system, a small disturbance implies a small change, while a big disturbance leads to a big change. On the contrary, complex systems do not necessarily exhibit such proportionality. Small changes could imply dramatic effects and big ones could have only marginal implications. Scheffer et al. (2001) propose an interesting example from ecology to explain non-linear responses to external stimuli. They note how an ecosystem (in particular, they focus on the eutrophication process of shallow lakes, i.e., the process by which shallow lakes become overly enriched with nutrients, therefore inducing excessive growth of algae) may appear to remain unchanged or only slightly changed up to a certain critical level of human intervention (which they name “stress”), beyond which follows a catastrophic transition to a new state. However, in the case studied by Scheffer et al. (2001), a switch back to the previous state requires much more than simply restoring the stress to the level that preceded the collapse. Another good example is desertification. Increasing grazing intensity can incrementally destroy vegetation, but once desertification occurs, it is not enough to reduce (or even remove) grazers in order to stop it and restore the previous level of vegetation.
A language-related example could be the process of acquiring new vocabulary. Indeed, vocabulary learning begins at a very slow rate, which then increases and eventually slows down again (Larsen-Freeman and Cameron, 2008). If we were to plot this pattern on a graph with time on the x-axis and vocabulary size on the y-axis, we would not draw a straight line but an S-shaped curve. At the beginning it is reasonable to find it difficult to acquire new words, especially if they are substantially different from the vocabulary of one's native language. One might find it hard to memorize new and possibly unfamiliar sounds and orthography, which makes the acquisition process very slow. However, as the process goes on, one becomes more and more familiar with pronunciation and spelling and might even start making connections between words sharing the same root or having the same prefix or suffix. Besides, it is very likely that a person interested in learning vocabulary in a new language is exposed to that same vocabulary in other ways, both actively and passively, for example hearing it at the grocery store, reading it on billboards, and so on. Speaking specifically of young children in the process of learning their native language, MacWhinney (1998) factors in growing cognitive capacity as an additional interaction variable. All this clearly accelerates the acquisition process. However, as one reaches a wide vocabulary, the process slows down again and approaches a horizontal asymptote. This can happen for a number of reasons. For example, it becomes less likely that one hears or reads less frequently used words; or, being able to count on a wide vocabulary and sufficient fluency to make oneself understood, one might feel less and less in need of looking up a specific term in the dictionary, eventually missing out on the least common synonyms.
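The S-shaped acquisition curve just described can be sketched with a simple logistic function. The parameter values below (total attainable vocabulary, learning rate, time of fastest growth) are purely illustrative assumptions, not empirical estimates:

```python
import math

def vocabulary_size(t, capacity=20000.0, rate=0.15, midpoint=40.0):
    """Logistic vocabulary growth: slow start, fast middle phase, plateau.

    capacity: asymptotic vocabulary size (illustrative assumption)
    rate:     steepness of the learning phase (illustrative assumption)
    midpoint: time (e.g. in weeks) at which growth is fastest
    """
    return capacity / (1.0 + math.exp(-rate * (t - midpoint)))

# Words gained over three equally long intervals: growth is slowest at the
# start and at the end, and fastest around the midpoint.
for start in (0, 35, 70):
    print(start, round(vocabulary_size(start + 10) - vocabulary_size(start)))
```

Plotting this function against time yields exactly the S-shaped curve described above, with the horizontal asymptote at the assumed capacity.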
Several instances of non-linear patterns have been detected by scholars in language policy and the economics of multilingualism. A well-known example is that of the threshold in the process of language shift, i.e., the process whereby a speech community traditionally speaking language B gradually replaces it with language A (Grin, 1992). A threshold is a stage in this process where “it is too late” to go back, a point beyond which language B will inevitably give way to language A. Ambrose and Williams (1981) discuss the case of Welsh. Distinguishing between Welsh speakers (including bilinguals) and Welsh monoglots, they argue that there exists a “language loss line” (slightly below 50% of monoglots, according to their empirical findings) under which the entire Welsh-speaking population starts to drop and eventually disappears. Grin (1992) further explored this intuition from a theoretical perspective, noting that there is no single threshold point. Rather, several (or better, an infinite number of) “points of no return” exist, depending on the interaction of demographic and linguistic variables, such as the distributions of speakers across languages (in turn affected by migration flows as well as birth and death rates) and the attitudes of people towards these languages (depending, among other things, on the availability of opportunities to use a specific language). Language survival can be attained through policy intervention. The function linking these variables tends to be non-linear. Besides, a small variation in the initial conditions leads to drastic changes in the stable equilibrium eventually attained. Therefore, any action drafted by policy makers should take this non-linearity into consideration. In the same study, the author identifies a feedback loop characterizing the level of language survival (defined by a variable called “language vitality”).
In particular, the latter is quite clearly influenced by intergenerational transmission as well as individual loss and acquisition, because these determine the percentage of speakers of each language. At the same time, these two variables are functions of language vitality. A decreasing level of language vitality can induce a decline in the level of intergenerational transmission, as well as in the level of acquisition, which eventually cannot make up for the loss of speakers over time (of course, the opposite is also true, in that an increase in language vitality increases the interest in transmitting or acquiring the language). Grin (1992) notes, however, that language vitality does not necessarily feed on itself.
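The tipping-point behaviour described here can be illustrated with a deliberately minimal toy dynamic of my own construction; it is not Grin's actual model, and the cubic update rule, the rate and the threshold value are all illustrative assumptions. The share b of B-speakers has stable equilibria at 0 and 1 and an unstable one at the threshold:

```python
def simulate_shift(b0, threshold=0.5, rate=0.1, steps=200):
    """Toy dynamic for the share b of speakers of language B.

    Below `threshold`, attrition outweighs transmission and B decays
    towards extinction; above it, transmission dominates and B recovers.
    Fixed points sit at 0, 1 and the threshold itself. Illustrative only.
    """
    b = b0
    for _ in range(steps):
        b += rate * b * (1 - b) * (b - threshold)
    return b

# Two nearly identical starting points end up in opposite equilibria:
print(simulate_shift(0.55), simulate_shift(0.45))
```

Starting at 0.55 the share of B-speakers recovers towards 1, while starting at 0.45 it collapses towards 0: a small difference in initial conditions selects entirely different stable equilibria, which is the non-linearity policy makers must reckon with.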
Non-Gaussianity
A non-Gaussian distribution is characterized by a higher-than-normal likelihood of extreme events, which can bring about unexpectedly big effects. The fact that extreme and unexpected events can have important consequences on language issues is easy to see. In the previous pages it was briefly mentioned that migration flows play a role in the definition of the linguistic landscape of a region. This is true both in the long term and in the short term. In the long term, one can think of the colonization processes that made four European languages (English, French, Spanish and Portuguese) the major languages of the contemporary Americas, virtually annihilating indigenous languages. 1 Concerning the short term, one can think of the emergence of non-indigenous communities across Europe over the last few decades (either non-native to the country of settlement, such as the Romanian community in Italy, or to Europe in general, such as Latin American or African communities). This clearly has implications from a number of perspectives. One can think of the EU directive that ensures the right of suspected or accused persons to interpretation and translation in a language that they understand during criminal proceedings. 2 As a consequence, member states have an obligation to provide interpreters and translators to people speaking a language different from the local one(s). Complying with this principle is straightforward: locating and anticipating needs is relatively easy, and so is preparing and eventually providing competent professionals to deliver the requested services. However, this is only simple as long as the language landscape remains constant or changes in a “predictable” way. Nevertheless, such a system may collapse very easily under the pressure caused by an (apparently) unlikely and unforeseeable event. Current migration flows seem to confirm this.
Because of unexpected events (such as terrorism and war in the Middle East and a long civil war in Libya), migration flows towards Europe have dramatically increased during the last few years, boosting the presence of non-indigenous people on European soil, from all sorts of different cultural and linguistic backgrounds. Such an unpredicted shock can easily undermine the functioning of the administration (not to mention its socio-economic repercussions). A sudden increase in the volume and diversity of migration is not easy to cope with when it comes to granting translation and interpreting services, among other things. The receiving country may not be prepared, in terms of staff, to deal with incoming people. As a consequence, policy interventions in this domain should aim at boosting systemic resilience by making the system flexible and able to quickly adapt to external shocks.
In the case of complex phenomena, extreme events occur more frequently than a Gaussian distribution would predict and, most importantly, they carry more weight than one could expect. Besides, if we concentrate on the average of the observed values, we might be missing an important part of the story. In complex phenomena, seemingly unlikely events are not that unlikely and can have dramatic repercussions. In general, one might be tempted to use historical observations to make predictions, assuming the existence of predetermined patterns which will eventually repeat themselves over time, in a cyclical fashion. This is not the case for complex systems, where outliers often have significant consequences. In relation to complex systems, scholars have sometimes spoken of “black swans” 3 to define those occurrences that are believed not to be possible until they actually occur. Besides, it can be argued that these events are the only ones that can seriously affect a system and have a long-term impact (such as sudden shocks in the financial markets).
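The point can be made numerically. In the sketch below, the choice of distributions (a standard Gaussian against a Pareto distribution with shape 1.5) and the sample size are arbitrary assumptions made purely for illustration:

```python
import random

random.seed(42)
N = 100_000

thin = [abs(random.gauss(0.0, 1.0)) for _ in range(N)]   # Gaussian magnitudes
heavy = [random.paretovariate(1.5) for _ in range(N)]    # heavy-tailed draws

# "Extreme" events, here values above 10 (more than ten standard deviations
# for the Gaussian), never show up in the thin-tailed sample but are routine
# in the heavy-tailed one:
print(sum(1 for x in thin if x > 10))    # 0
print(sum(1 for x in heavy if x > 10))
```

With these assumed parameters, draws above ten never occur in the Gaussian sample, while in the heavy-tailed sample they typically number in the thousands: the "black swans" are not rare at all once the tail is fat.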
I shall devote a few words to clarifying the difference between non-linearity and non-Gaussianity, as the two can easily be mistaken for one another. Both ideas focus on the fact that apparently minor issues can have important consequences. However, non-linearity is about the magnitude of the impact of small events, regardless of their likelihood or frequency. Non-Gaussianity, conversely, only reminds us that, in complex systems, one cannot rule out extreme events, if only because they are often the ones that imply the most significant impacts.
Spontaneous order and lack of central control
Spontaneous order and lack of central control are possibly the easiest complex characteristics to spot, concerning both language itself and language use. A language evolves over time, developing a rich vocabulary and a complex syntax, with every speaker (often unconsciously) contributing to it (Cantor and Cox, 2009, p. XI). Speakers reciprocally give up a part of their linguistic liberty (intended here as the ability to pronounce different sounds) to “meet halfway”. They define common words and rules in order to be able to understand each other (Adelstein, 1996). However, these rules are created, followed and broken continuously. As was observed above, a living language is never in equilibrium. Rather, it fluctuates around an “equilibrium region”, determined by a spontaneous tendency to maintain mutual intelligibility among speakers, and by individual use, whose peculiarities are often defined at a decentralized level. Besides, it should be noted that languages are often resistant to central control, i.e., to the attempts of language scholars to regularize speech patterns (Cantor and Cox, 2009, p. XII). If, say, a language regulator prescribed the use of a specific word or grammar rule, it is not obvious that speakers would respond positively to the imposition. This is the case, for example, with adjective agreement in many Romance languages, such as French. While the Académie française, the central institution that deals with matters pertaining to the French language, prescribes the use of the masculine agreement when an adjective refers to a number of nouns with different genders, some users display a preference for a more inclusive language using, among other things, the so-called “accord de proximité” (proximity agreement). Such type of agreement provides that adjectives should agree in gender with the closest noun. 4 Therefore, it is evident that a random element coexists with a spontaneous order and a weak form of central control.
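The emergence of a shared convention without any central coordinator can be illustrated with a minimal "naming game", a standard class of agent-based models of convention formation. The version below is a simplified sketch with arbitrary parameters (number of agents, number of interactions) of my own choosing:

```python
import random

random.seed(7)

def naming_game(n_agents=50, rounds=40_000):
    """Agents repeatedly pair up to name an object. Nobody is in charge,
    yet the population converges on a single shared word."""
    vocab = [set() for _ in range(n_agents)]   # each agent's known words
    next_word = 0
    for _ in range(rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:                 # no word yet: invent one
            vocab[speaker].add(next_word)
            next_word += 1
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:              # success: both drop synonyms
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:                                  # failure: hearer learns it
            vocab[hearer].add(word)
    return vocab

final = naming_game()
shared = set().union(*final)
print(len(shared))   # typically 1: a single convention, designed by no one
```

After an initial proliferation of competing words, local pairwise interactions alone drive the population to a single shared name: order emerges from decentralized use, exactly the kind of spontaneous coordination described above.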
Emergence and hierarchical organization
Emergence is probably one of the most important characteristics displayed by complex systems (if not the single most important one) and would probably deserve a whole discussion to itself. Here, however, I only offer a general introduction. A system is characterized by emergence if it exhibits novel properties that cannot be traced back to its components (Homer-Dixon, 2010). Some scholars call these properties “emergent properties” (Bunge, 2003; Elder-Vass, 2008). The adjective “emergent” refers to the fact that such properties are not present at the individual level, but only “emerge” as we move on to consider higher levels of aggregation. To understand this idea, one could think of utterances as sets of words. Words have their own properties (such as meaning and syntactic function) and, put together, they can form sentences. However, a sentence is more than the simple sum (or succession) of the words that it contains. It has its own meaning that emerges only when its components are put together and is also dependent on extra-verbal contextual elements. 5
A good example from the natural sciences is the saltiness of sodium chloride (i.e., table salt), which is attributable neither to chloride nor to sodium individually. Saltiness emerges as a consequence of a (one-to-one) combination of the two elements. Elder-Vass (2008) goes on to stress that an emergent property is not only one that is not possessed by any of the parts individually, but also one that would not be possessed by the compounded entity if there were no structuring set of relations between the individual parts (it is therefore not due to the mere co-presence of these elements). This reasoning echoes an argument made much earlier by Nobel laureate Herbert Simon.
One could conclude that all other characteristics of complex systems are indeed emergent properties. As a matter of fact, that is far from incorrect. Spontaneous order and self-organisation, discussed in the previous subsections, are indeed emergent properties. They emerge only as a consequence of the existing interactions between parts and they are not inherent to any of them. Talking specifically about spontaneous order, Hayek defined it as “orderly structures which are the product of the action of many men but are not the result of human design” (Hayek, 2013, p. 36). Market dynamics leading to equilibria (in the absence of a central coordinating body) are quite an eloquent example of emergent (orderly) behaviour (Petsoulas, 2001).
To explain emergence within language issues, I consider the issue of clashing interests at different levels, which underlines the importance of taking the meso-level into consideration as well. In general, recognizing the complex nature of language matters and adopting a tripartite perspective becomes crucial for policy makers. Let us look at another practical example, building on a discussion by Grin (2015) on the use of different languages in a higher education context. At the micro-level, a researcher walking through a (at least moderately) culturally diverse university will immediately notice that individual students have different backgrounds, different language profiles, and use all sorts of different communication strategies (speaking their own language, speaking the interlocutor's language, code-switching, code-mixing, intercomprehension, and so on) and that these strategies are adopted by users with no external restriction. At the meso-level, a researcher will notice that universities make choices about the use of one language or another for different purposes (e.g., choice of languages taught as subjects; choice of language(s) of instruction, including exams and, possibly, educational materials; choice of language(s) for internal administrative purposes; choice of language(s) for external communication) and that these choices do not necessarily correspond to micro-level strategies in terms of diversity. At the macro-level, a language policy and planning (LPP) researcher's interest will typically concern the general choices made by the authorities (assuming we are dealing with a publicly funded education system) regarding the language(s) of instruction in universities (as well as in other educational contexts). We shall also note that interests may actually coincide between the micro and the macro-levels and that, therefore, we would be missing a big side of the story if we ignored meso-level entities.
Let us consider country X, where X is the official language, though Y is also spoken by a newly formed community, whose members are not always fluent in X. Besides, country X has significant trade relations with country Z. Let us also consider the information written on the packaging of goods for sale in country X's supermarkets. Such information includes ingredients, conservation methods, origin, etc. A breakdown of the interests at stake in this scenario, based on the three levels of perspective, is as follows:

MICRO: individual A, speaking exclusively language X, doing her shopping in the local supermarket. She is clearly interested in understanding what is written on the packaging and, therefore, she wants and, as a national of country X, expects information to be provided in a language that she speaks, i.e., language X. In another aisle of the same supermarket, individual B, belonging to the immigrant community and speaking language Y, would like to have information in language Y, but is willing to struggle with language X.

MESO: the CEO of a company based in country X producing goods to be sold in supermarkets. Incidentally, the CEO is also part of the immigrant community; her native language is Y, but she is also fluent in X. As she acts on behalf of a private institution, it is reasonable to assume that her sole interest is to generate profit for the company and, therefore, she wants to limit packaging costs, which include printing information, as much as possible. Initially, she would avoid adding information in another language altogether, but she fears that the company might lose clients. In country Z, another CEO is facing a similar situation. She would like to market her company's product in country X, but this would imply translation costs (towards X? Y? Both X and Y?), packaging reconfiguration and, therefore, an increase in production costs.

MACRO: the president of country X's consumer protection authority.
She was appointed by the newly elected government, a notable supporter of minority rights. Her main interest is, obviously, that consumers are protected and, therefore, constantly aware of their consumption choices. She works to push the parliament of country X to pass a law that obliges companies selling products in country X, whether local or based abroad, to provide information in language X on the packaging. At the same time, she is working on another proposal that would introduce an obligation to add information in a non-official language spoken as a sole language by more than a certain number of taxpayers. However, she did not include it in the law about language X because she did not want to provoke a negative reaction from the opposition.
A number of considerations can be made on the basis of this example. We note that interests at the micro and macro-levels are somewhat convergent, in that in both cases the optimal solution would be to have information in the language(s) spoken by the residents of country X. If one skipped the meso perspective, one would be tempted to believe that this is where the story ends, but one would be overlooking a whole other set of interests. At the meso-level we find entities such as corporations, which have completely different objectives and might even consider the request for multilingual information a nuisance. It is evident that the convergence of micro and macro interests is not enough, and that intervention at the meso-level is needed. One might argue that the companies of the example are private actors and that their behaviour cannot be determined through policy making (as far as language use is concerned). This is only partially the case. True, these private institutions have the freedom to make decisions about language use as far as internal processes are concerned. Nevertheless, the government can (and often must) intervene to regulate the relationships between these companies and the people, including the use of language. Finally, we should note that the company’s profit is of interest to every individual in the company, our meso-unit, whether they are nationals or foreigners, speakers of language X or of language Y. Indeed, the company’s success (or failure) has substantial repercussions on its workers’ conditions. We can therefore see how, as mentioned, a meso-level characteristic (in this case, the interest in corporate profit and the subsequent strategies) comes into existence only when the meso-level aggregation takes place.
The role of simulation in social science research
So far, I have shown that language issues display, among other things, non-linear behaviours, spontaneous order and emergence. Besides, extreme and unlikely events can have dramatic repercussions. Consequently, policies dealing with language issues should be drafted adopting a specific complexity approach. This is particularly true considering that simulations are a good substitute for real-life experiments when the latter are potentially expensive and burdensome. The literature on complexity theory offers a good number of examples of applications of complexity theory to public policy matters. However, complexity theory has only seldom been applied to language policies. Therefore, as of today there is no such thing as a complexity framework for implementing language policies, able to address language challenges in a flexible and adaptive way, taking all non-trivial aspects into consideration. The general idea is that, owing to the non-negligible presence of randomness, language policies (and, in general, policies addressing situations where the future is unpredictable) call for a complex approach. Therefore, traditional quantitative and qualitative methods need to be complemented by other research methods, such as computer-based simulation. In particular, agent-based models have the great advantage of relating the heterogeneous micro-behaviours of agents with different information, decision rules, and situations to the macro-behaviour of the overall system (Lempert, 2002 ).
To address the difficulties posed by complex systems, researchers often resort to large-scale controlled studies for the purpose of spelling out individual causal links. When it comes to the study of social phenomena, however, such investigations are often not possible, for a number of practical and ethical reasons. Besides, “controlling” a group of people in their social interactions is not the same as controlling, for example, the way they are treated with a specific drug, not to mention controlling the behaviour of particles or molecules. Humans and their behavioural patterns can vary in virtually countless ways, and very similar conditions can sometimes lead to radically different results. It is very hard to isolate social systems from the influence of the greater network in which they exist. For example, it is not sufficient to study the behaviour of the students of a given school without considering the city in which the school is located. This makes it virtually impossible to rule out the impact of external causes on the dynamics under examination. Besides, this becomes all the more critical as global interconnectedness and interdependence increase. In short, a purely in vitro study is usually not possible in social science research. Sometimes, researchers get around this problem by resorting to theoretical modelling, which is highly mathematics-based and, thanks to its inherent formality, helps spell out causal links with a high degree of conceptual consistency. Theoretical modelling is, in a certain sense, a way of “controlling” the experiment, in that the modeller can make assumptions about properties and behaviours. However, this still does not solve the problem of mutual influence with other systems, nor does it account for the fact that individual human beings are extremely heterogeneous in their properties. Analytical models often need to put aside such heterogeneity for the sake of mathematical tractability.
This is where computational social science, the field of social science that resorts to computational methods, comes into play. Among computational methods, agent-based modelling, a type of computer simulation method, is a particularly important ally of social scientists. Agent-based models use algorithms to simulate the behaviour and interactions of micro-level agents with a view to replicating some macro-level dynamics under study.
The general idea behind computer simulations for the social sciences is relatively straightforward. Given that it is often impractical to realize large-scale controlled studies, social scientists can resort to computer simulations to recreate an in-silico version of the context of interest and simulate the dynamics considered. The environments and the agents are usually informed through real-life observation, so as to make the model representative of reality. If the model is conceptually coherent and its behaviour is validated by actually observed trends, one usually goes on to study the dynamics of the system under different conditions. In so doing, one can evaluate how and to what extent different variables impact the overall system.
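As a minimal illustration of this workflow, the sketch below implements a deliberately simple in-silico model of language contact. Every name, parameter, and the interaction rule (a random agent adopts the interlocutor's language with a fixed probability) is an illustrative assumption, not drawn from any published model.

```python
import random

random.seed(42)

def run_model(n_agents=100, minority_share=0.3, steps=500, prestige=0.5):
    """Toy sketch: agents speak language 'A' or 'B'.

    At each step a random agent meets another; if they speak different
    languages, the first adopts the interlocutor's language with
    probability `prestige`. All values are illustrative assumptions.
    """
    agents = ["B"] * int(n_agents * minority_share)
    agents += ["A"] * (n_agents - len(agents))
    for _ in range(steps):
        i, j = random.sample(range(n_agents), 2)
        if agents[i] != agents[j] and random.random() < prestige:
            agents[i] = agents[j]  # speaker i switches language
    return agents.count("B") / n_agents  # minority share after contact

share_after = run_model()
```

In a real application, the initial shares and the switching rule would be informed by observation, and the resulting trajectory would be validated against actually observed trends before varying the parameters.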
Applications of agent-based modelling have been increasing at an accelerating pace and concern a wide range of fields. Simply looking at a few recent publications in which agent-based modelling was the main methodology, we find applications as varied as: a simulation of firms' decision-making processes with a view to detecting the relation between the heterogeneity of firm sizes and innovation stemming from collaborative behaviour (Hwang, 2020 ); a simulation model investigating how different policy interventions contribute to the use of electric vehicles and the use of renewable energy sources to recharge them (van der Kam et al., 2019 ); a simulation of people's meat consumption in Britain and the different impacts of price changes, animal welfare campaigns and health campaigns on people's propensity to consume meat (Scalco et al., 2019 ).
It should be clear, then, that computation-based methods are an important ally for social scientists. Obviously, they are not meant to replace other methods, such as equation-based and statistical models, but rather to complement them, in order to put the massive progress in information technology at the service of research. As a matter of fact, simulation models rely heavily on other methodologies, both in model development and in the analysis of results. As an example of such practice, Carrella et al. ( 2020 ) discuss the application of linear regularized regression to find the optimal calibration of the model parameters to match the data. Starting from the fact that regression is a well-understood and commonly used method, the authors leverage this knowledge and apply it to the delicate task of parameter estimation. In short, they propose following these four steps: repeatedly run the model with a random vector of K parameters at every simulation; collect M summary statistics for each simulation; train K different regressions using each parameter as a dependent variable and the collected summary statistics as independent variables; finally, input the actually observed statistics into the K regressions in order to find the "real" parameters that generated them.
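The four steps can be sketched on a toy one-parameter "model". The simulator, the noise levels, and the observed statistic below are all invented for illustration, and a single closed-form simple regression stands in for the K regularized regressions described by Carrella et al.

```python
import random

random.seed(0)

def toy_model(theta):
    # Stand-in "simulation": two noisy summary statistics of one parameter.
    return (2 * theta + random.gauss(0, 0.1),
            theta ** 2 + random.gauss(0, 0.1))

# Steps 1-2: run the model with random parameters, collect statistics.
runs = [(theta, toy_model(theta))
        for theta in (random.uniform(0, 5) for _ in range(500))]

# Step 3: regress the parameter on a summary statistic (here only the
# first statistic, with a closed-form simple least-squares fit).
xs = [stats[0] for _, stats in runs]
ys = [theta for theta, _ in runs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Step 4: feed the actually "observed" statistic into the regression
# to recover the parameter that plausibly generated it.
observed_stat = 6.0  # hypothetical real-world measurement
theta_hat = intercept + slope * observed_stat
```

Since the first statistic is roughly twice the parameter, an observed value of 6.0 points back to a parameter near 3.0, which is what the fitted regression recovers.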
For another example, ten Broeke et al. ( 2016 ) review in great detail the pros and cons of various methodologies to perform sensitivity analysis on agent-based models in terms of different aims. The three methodologies analyzed are: regression-based methods, which decompose the variance of the ABM outcomes by regressing them against the input parameters; the OFAT (one-factor-at-a-time) sensitivity analysis, which looks at the variation in the output when one parameter changes, while all other parameters are kept fixed; the so-called Sobol method, which decomposes the overall variation in the model by attributing fractions of it to individual parameters.
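The OFAT procedure in particular is simple enough to sketch directly. The stand-in model below, its two hypothetical parameters, and the sweep values are all illustrative assumptions; in practice the model would be a full ABM run.

```python
def model_output(params):
    # Stand-in for an ABM outcome, e.g. the final share of minority
    # speakers; a simple linear function keeps the example transparent.
    return 0.5 * params["prestige"] + 0.2 * params["contact_rate"]

baseline = {"prestige": 0.4, "contact_rate": 0.5}

def ofat(model, baseline, sweeps):
    """One-factor-at-a-time: sweep each parameter over its values while
    holding every other parameter fixed at its baseline."""
    results = {}
    for name, values in sweeps.items():
        outputs = []
        for value in values:
            params = dict(baseline)   # reset all parameters
            params[name] = value      # vary only this one
            outputs.append(model(params))
        results[name] = outputs
    return results

res = ofat(model_output, baseline,
           {"prestige": [0.0, 0.5, 1.0], "contact_rate": [0.0, 0.5, 1.0]})
```

Comparing the output ranges in `res` then shows which parameter the outcome is most sensitive to, at the cost of ignoring interactions between parameters (which the Sobol method captures).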
The aims taken into consideration by the authors are the following: to find how patterns and emergent properties are generated within the model; to examine the robustness of emergent properties; to quantify the variability in the outcomes resulting from model parameters.
It is clear, then, that agent-based models, as well as simulation models in general, do not represent an alternative to other more traditional research methods. Quite to the contrary, their potential is fully exploited when they are used in combination with other methods.
The role of simulation in language policy
In light of the considerations made throughout this paper, it should be clear that language-related phenomena unfold in a complex environment. Indeed, as was seen, language issues are never just language issues, which is why they should always be studied from an interdisciplinary perspective. It is enough to look at some recent publications in the field of language policy and language economics to realize how closely language matters are connected to numerous seemingly unrelated areas. 6 For example: Golesorkhi et al. ( 2019 ) examine the relationship between language use and financial performance of microfinance banks; Civico ( 2019 ) discusses the use of language policy to serve socio-political objectives throughout the twentieth and twenty-first centuries in China; Kang ( 2020 ) analyses the changes in North Korea’s language policy and attitudes towards the English language following the rise to power of Kim Jong-un.
To give an idea of the numerous ways in which language matters can be articulated, Grin et al. (2018) address 72 questions concerning languages, organized in six different sections. The topics included range from language policy analysis to linguistic diversity and language education. The collection of questions was addressed by teams of people with different disciplinary backgrounds, ranging from economics, mathematics and philosophy to education, sociolinguistics and law. This comprehensive approach stems from the realization that language issues are all interrelated and exist in a greater system. Issues such as language teaching, the provision of language services, the protection of minority languages and the official adoption of a language all influence and are influenced by each other. In light of all this, a complex perspective on language matters becomes crucial if one hopes to gain more complete and deeper insights. Ideally, this would be achieved by setting up large-scale studies involving numerous people with different disciplinary backgrounds. However, this is not always possible, in that it calls not only for a joint and coordinated effort, but also for substantial financial support. This is where computational modelling comes in particularly handy. Thanks to their flexibility and capacity to integrate knowledge from various fields, simulation models allow us to gain insights into dynamics that would otherwise be unobservable. Considering the converging evidence about the complexity of language matters adduced throughout this paper, it seems reasonable to conclude that, as is the case for many other fields, language policy can benefit from the application of a complex approach. An optimal implementation and evaluation of language policies (as well as policies in general) requires a large amount of data and, ideally, direct observation of the impact.
However, this is not often possible, and in many cases, it is not advisable to implement a measure just for the sake of observing its effect. Agent-based modelling offers a natural solution to such problems. It can help language policy makers in at least three different ways: by simulating existing phenomena to gain insights about the matter under study (such as the development of different communication strategies); by providing an assessment of the potential impact of different measures (for example, investigating how an increase in the average level of fluency in a minority language affects the number of speakers over time); by simulating the changes in the system caused by exogenous shocks (such as the impact of a sudden wave of immigration on the linguistic landscape).
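The second use case, assessing how raising average fluency affects speaker numbers over time, can be prototyped along the following lines. The transmission mechanism, the probabilities, and the `boost` parameter standing in for a fluency-raising policy are purely illustrative assumptions.

```python
import random

random.seed(1)

def simulate(n=1000, generations=10, transmission=0.6, boost=0.0):
    """Toy sketch: each speaker of a minority language passes it to the
    next generation with probability `transmission + boost`, where
    `boost` stands in for a policy raising average fluency.
    All figures are illustrative assumptions."""
    speakers = n
    for _ in range(generations):
        p = min(1.0, transmission + boost)
        speakers = sum(1 for _ in range(speakers) if random.random() < p)
    return speakers

no_policy = simulate()             # roughly 1000 * 0.6**10 speakers
with_policy = simulate(boost=0.3)  # roughly 1000 * 0.9**10 speakers
```

Even this caricature shows the non-linearity at stake: a modest per-generation change in transmission compounds into a dramatic difference in the surviving speaker population, which is exactly the kind of dynamic a policy maker would want to explore before intervening.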
All these objectives can be achieved by policy makers by drawing from and building on the already vast amount of qualitative research on language matters. Indeed, agent-based modelling is a natural extension of qualitative studies. Besides, as I will discuss in the next section, ABMs do not need to rely exclusively on social theories to provide agents with realistic behavioural rules. Agent-based modelling is very flexible and can be easily combined with other more qualitative-oriented methodologies.
Among the virtues of agent-based modelling, I highlighted aspects such as flexibility, adaptability, effective visualization, ease of programming and immediate usability by both experts and non-experts. However, I would like to mention another strength of ABMs, one that may speak especially to policy makers, i.e., their ability to capture potential unintended consequences of policy measures. Unexpected or unintended effects of policies are rather common and discussed at length in the relevant literature. For example, Bernauer and Knill ( 2009 ) investigate the case of a German packaging waste policy that turned out to be ineffective soon after its implementation and that proved very hard to dismantle. Unintended consequences usually result from a combination of complexity and lack of information that limits policy makers' understanding of the policy (Lindkvist et al., 2020 ). They can be frustrating, confusing, and time-wasting. Most importantly, unintended consequences can be costly.
The issue of unintended consequences is crucial for policy makers, who are often reluctant to put in place costly large-scale policy measures on the basis of theory-based models that can only be verified after implementation. However, ABMs can provide a risk-free environment in which policy makers can experiment with different measures. Indeed, if developed with sufficient attention to social and behavioural mechanisms, an ABM can highlight some unexpected or unintended dynamics thanks to its integrated multi-scale environment. In practice, this would amount to saving a non-negligible amount of money that would have otherwise been invested either in testing practically the theory-based policies or in developing and implementing measures aimed at fixing or even reverting the unintended effects of the policy. Therefore, the integration of agent-based modelling in the current policy making process can result in non-negligible savings, better resource allocation and generally improved governance.
Potential applications
Agent-based modelling and fuzzy logic
In an attempt to increase the level of realism of simulation models, some authors have suggested combining agent-based modelling with fuzzy logic as a further extension to the use of natural language data to inform agents. Fuzzy sets are sets in which elements have a "certain degree" of membership. Unlike Boolean logic, in which membership may only take one of two values, i.e., 0 (not a member) or 1 (member), in a fuzzy framework an element can belong to a set with a varying degree of intensity. In short, each member of the set takes on a "grade of membership" that ranges from 0 (not a member) to 1 (full member). All members taking on values in between are "partial" members. Fuzzy logic is able to capture the vagueness and uncertainty which often guide human behaviour. With a view to using textual data to define the behaviour of artificial agents, being able to discern the intensity of people's attitudes with respect to specific facts can be crucial. The extraction of agents' properties from text data can also benefit greatly from an approach that can deal with uncertainty. After all, humans constantly function with a certain degree of uncertainty. Izquierdo et al. ( 2015 ) propose the following example. Consider the sentence "a tall, blonde, middle-aged guy with long hair and casually dressed is waiting for you at (sic) the lobby". While a human can more or less easily figure out who the concerned person is in a group of people (or at least narrow down the selection to a number of elements), implementing an equally effective artificial agent can be extremely challenging. The reason is that concepts such as "tall" or "middle-aged" are not clear-cut, but fuzzy. Indeed, it would be ridiculous to impose a threshold above which a person is "tall" while one who is a few millimetres shorter is not.
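A fuzzy set such as "tall" is typically encoded with a membership function. In the sketch below the thresholds (165 cm and 185 cm) and the linear ramp between them are arbitrary illustrative choices, not part of any standard.

```python
def membership_tall(height_cm):
    """Fuzzy membership in the set 'tall': 0 below 165 cm, 1 above
    185 cm, and a linear ramp in between. The thresholds are
    arbitrary illustrative choices."""
    if height_cm <= 165:
        return 0.0
    if height_cm >= 185:
        return 1.0
    return (height_cm - 165) / 20

# A person of 175 cm is 'tall' to degree 0.5, rather than being
# simply tall or not tall.
grade = membership_tall(175)
```

The graded value avoids the absurd hard threshold mentioned above: two people a few millimetres apart receive nearly identical membership grades instead of falling on opposite sides of a binary cut-off.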
In this context, fuzzy logic tries to cope with the fact that computers cannot match the natural ability of humans to deal well with imprecise information. Fuzzy logic represents a step further in the treatment of a concept that was stressed many times throughout this research work, i.e., the heterogeneity of agents. As said, the agents in a system might belong to the same class but they might differ slightly or significantly in their characteristics. This includes the subjective way agents might perceive and respond to certain properties of the system. Consider the following example, partially inspired by Izquierdo et al. ( 2015 ). Let us imagine a system of reading recommendations for language learners. The objective of the system is to recommend to individual language learners a set of readings whose difficulty matches their level of fluency. This could be achieved by asking readers to rate the readings in terms of difficulty (for example on a scale from 1 to 10, where 1 is "extremely easy" and 10 is "extremely difficult"), along with their self-assessed level of fluency (say, from basic to advanced). Ideally, this system would be recommending readings that are accessible to readers of the same level. However, such a system (or at least, a system of this sort that is accurate and consistent) is very hard to implement. One of the reasons is that users are faced with a number of fuzzy concepts at various levels. For example, users usually have different understandings of the words "easy" and "difficult". Words like "extremely" might be interpreted as carrying different amounts of intensity. When asked to self-assess their level of fluency, users might have radically different understandings of what it means to have basic knowledge of a language or being fluent in it. 7 Consequently, one might consider implementing a framework of shared definitions to correctly assess users' evaluations. Such a framework might have varying degrees of precision. 
On the one hand, very general definitions are quick and easy to handle for users but might not lead to any significant improvement. On the other hand, a long list of very detailed descriptions could be cumbersome and discourage users. In order to find the appropriate level of detail, one might consider simulating the recommendation system in a computer environment. The model would reproduce individual users, each with their own (randomly assigned) level of fluency and understanding of the various concepts mentioned above. 8 The agents would then be presented with readings (whose difficulty is exogenously determined) and asked to rate them selecting from a list of descriptive words, according to their interpretation of these words. The implementation of a framework of shared definitions would be represented by a lower or greater variation of these understandings among users. The objective of the model would be to determine to what extent leaving room for the concepts mentioned above to be fuzzy causes a mismatch between the recommendations and the actual level of users. This way it is possible to determine how precise the framework of shared definitions should be, in order to find an optimal compromise between a superficial set of indications and a cumbersome and tedious list of descriptions.
Natural language processing and machine translation
As noted several times, agent-based models are highly dependent on reliable and accurate decision rules for agents. Grounding these rules in qualitative and empirical studies has often proved a successful though labor-intensive practice, because it is based on the direct observation of human behaviour. Recently, an increasing number of authors have suggested leveraging the vast amount of text data available today to model human cognition. Padilla et al. ( 2019 ), for example, suggest using natural language processing (NLP) to analyse the description of social phenomena in order to extract potential ABM specifications from unstructured narratives. This would help bridge the gap between simulation experts (those who have the technical skills to develop ABMs) and domain experts (those who provide information about the phenomenon at stake in order to inform the model). Another interesting idea is the use of NLP to model the role of associations in judgment and decision-making, which represents a major challenge in the creation of realistic agents. Bhatia ( 2017 ) notes that, through associations, individuals are able to process co-occurrences and statistical regularities on the basis of their past experiences in a relatively fast and effortless way. Such evaluations, whether correct or not, play a central role in the individual decision of the behavioural response to a stimulus. He proposes using word embeddings (vector-based representations of words) to generate realistic agents. 9 He discusses how often people fall into the so-called "conjunction fallacy", a common cognitive fallacy occurring when the joint probability of a set of conditions is erroneously believed to be higher than the probability of a single general one.
This fallacy, first discussed by Tversky and Kahneman ( 1983 ), is due to the fact that a more detailed description of an event (for example, an individual having a certain profile) can deceptively seem more "representative" of the population from which it is drawn and hence more likely. Bhatia ( 2017 ) argues that an idea or a situation (such as a stimulus) can be represented as a vector of the words that make up its description in terms of a given number of dimensions. The same can be done to represent possible reactions (such as the potential behavioural responses to a stimulus). One can then calculate the distance between these vector space representations (usually as the cosine of the angle for each pair of vectors) to determine the most likely reaction to an input. Following this line of reasoning, Runck et al. ( 2019 ) argue that word embeddings are able to capture people's cognitive biases and therefore reproduce more realistic behaviours. Therefore, the authors point to the fact that informing agents in this way may help overcome the too often fallacious assumption that agents behave rationally.
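The vector-space matching described here boils down to a cosine-similarity comparison. In the sketch below, the three-dimensional "embeddings" and the response labels are invented for illustration; real word embeddings have hundreds of dimensions and are learned from corpora.

```python
import math

def cosine(u, v):
    # Cosine of the angle between two vectors: 1 = same direction,
    # 0 = orthogonal, -1 = opposite.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" of a stimulus and two candidate
# behavioural responses; all numbers are invented for illustration.
stimulus = [0.9, 0.1, 0.3]
responses = {
    "approach": [0.8, 0.2, 0.4],
    "avoid": [-0.7, 0.5, 0.1],
}

# Pick the response whose vector points closest to the stimulus.
best = max(responses, key=lambda r: cosine(stimulus, responses[r]))
```

An agent driven by this rule would select the behavioural response whose representation is most associated with the stimulus, which is the mechanism Bhatia and Runck et al. propose for reproducing association-driven (and hence biased) rather than perfectly rational behaviour.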
In addition to its application to ABMs, the use of natural language processing is very promising in the context of policy making. Many applications of NLP to policy issues have been proposed in the past few years, such as using machine translation and fixed-phrase translators in emergency settings to enable communication between medical staff and refugee patients when they cannot communicate in a common language (Spechbach et al., 2019). Many other examples can be discussed. For example, one could wonder about the role of language-related computer-based methods, such as machine translation, in the provision of multilingual services. One might hastily conclude that, in an ideal future, a sufficiently advanced machine translation system might be the key to a world free of language barriers. In the context of minority language protection, being able to provide accurate translation across various languages could be seen as a measure in support of the diffusion of less spoken languages. Besides, in this ideal context, there would be no particular pressure to acquire skills in a more widely spoken language, in that one could simply rely on machine translation. One might even go as far as to say that reliable machine translation would make language rights (a form of human rights specifically concerning languages and their use) obsolete. Indeed, in an ideal (admittedly, sci-fi) scenario in which one could, say, wear a device that provides highly accurate translation of spoken and written language on the spot, one could easily live one's life in one's own native language. 10 However, the discussion is not so simple. One could even argue that machine translation might actually work against minority languages. As a matter of fact, artificial intelligence (and, consequently, all machine-mediated services) improves significantly when it is trained with an increasing amount of information.
Given that, as of today, the availability of corpora to train machine translation systems is strongly skewed towards a very limited number of languages, extremely accurate machine-based translations are very unlikely to exist for all language pairs. As a consequence, translation between some specific pairs of languages would be much more accurate than between other combinations. Eventually, machine translation would simply incorporate and transpose to a virtual context the already existing bias that favours widely spoken languages. These as well as other considerations are challenges that will have to be faced by policy makers willing to exploit the potential of computer-based methods.

Funding
Open access funding provided by University of Geneva.
Data availability
The manuscript has no associated data.
Declarations
Conflict of interest
The corresponding author states that there is no conflict of interest.

License: CC BY. Citation: SN Soc Sci. 2021 Aug 2; 1(8):197.
PMC9002620 (PMID: 35406088)

1. Introduction
Homocysteine (Hcy) is a sulfhydryl-containing amino acid that is produced when methionine is demethylated. Hcy can be converted to cysteine via the sulphuration pathway, or remethylated utilizing methyltetrahydrofolate or betaine [ 1 ].
The clinical definition of hyperhomocysteinemia (HHcy) is a total plasma Hcy above 15 μmol/L. Mild HHcy is induced by a diet lacking in Hcy-lowering vitamins, such as folate, vitamin B6, and/or vitamin B12 [ 1 ]. Experiments on induced HHcy (for example, in a rat model induced by methionine dietary overload [ 2 ] or by oral L-methionine or subcutaneous DL-Hcy administration [ 3 ]) allow for more research into the links between HHcy and other inflammatory illnesses, as well as the processes that underpin these connections.
Oxidative stress is among the processes thought to be involved in the pathophysiology of the damage produced by HHcy [ 4 ]. The main consequence of reactive oxygen species (ROS)-induced oxidative stress is to trigger inflammatory responses, which are mediated mostly by the nuclear factor NF-kB. Some researchers believe that Hcy causes atherosclerosis, either by directly harming the endothelium or by modifying the oxidative state of the endothelium. The formation of ROS such as hydrogen peroxide, superoxide anions, and hydroxyl radicals contributes to endothelial Hcy-mediated cytotoxicity [ 5 ] during the autooxidation of Hcy to homocystine or other mixed disulfides [ 1 , 6 ].
Data have revealed that HHcy may be linked to disorders affecting other organs [ 7 ]. The influence of HHcy on organs’ oxidative state has been examined in diverse tissues such as the endothelium [ 8 ], liver [ 9 ], heart, and brain [ 10 , 11 ]. Hcy may also have pathogenetic implications in inflammatory bowel disease (IBD), demonstrating that it is a pro-inflammatory and immunostimulating molecule [ 12 ]. Hcy is thought to stimulate the generation of hydroxyl radicals, which leads to lipid peroxidation (LPO). Proteins and carbohydrates can be damaged by free radicals and LPO products such as 4-hydroxy-2-nonenal and malondialdehyde (MDA), a prominent end product of LPO [ 13 , 14 ]. Superoxide anions can also react quickly with nitric oxide (NO) to generate peroxynitrite, a highly reactive oxidant that can cause tissue damage [ 15 , 16 ]. Hcy can also decrease the expression of antioxidant enzymes such as glutathione (GSH) peroxidase (GPx), which could otherwise help to eliminate the destructive effects of ROS [ 17 ]. The antioxidant proteins thioredoxin and heme oxygenase-1 (HO-1) are substantially downregulated by Hcy [ 18 ]. Several dietary components have been demonstrated to reduce the impact of HHcy, including folic acid and vitamins B6 and B12 [ 13 ]. In any case, health-care practitioners should create effective prevention and intervention strategies to combat this condition.
For many decades, plants have been employed to treat human illnesses. The cashew tree ( Anacardium occidentale L.) is native to Brazil and is now widely planted around the world. Cashew nuts, when consumed as part of a well-balanced diet, can help to reduce the risk of cardiovascular disease, particularly stroke, as well as metabolic syndrome [ 19 ]. An earlier study looked at the effects of dietary supplementation with industrial processing by-products such as cashew ( Anacardium occidentale L.) fruit on the intestinal health and lipid metabolism of rats with diet-induced dyslipidemia [ 20 ]. Nuts are regarded as an important part of a balanced diet, since they include protein, good fatty acids, and critical nutrients [ 21 ]. Natural antioxidants, such as polyphenol-rich meals, fresh fruits, and vegetables, may be able to counteract ROS-induced oxidative degradation [ 22 ]. Unsaturated fatty acids (UFAs) such as oleic (ω-9) and linoleic (ω-6) acid, flavonoids, anthocyanins, tannins, fiber, folate, and tocopherols are abundant in cashew nuts [ 23 ]. The cashew nut and its derivatives have a wide range of biological capabilities, including antioxidant and antibacterial qualities [ 24 ]. Several animal studies have shown that cashew nuts have antioxidant and protective properties in the treatment of several inflammatory syndromes [ 25 , 26 , 27 ].
Based on these findings, and in particular considering the folate and flavonoid content of cashew nuts, the aim of this study was to investigate the anti-inflammatory and antioxidant effects of oral administration of cashew nuts in a rat model of HHcy induced by oral L-methionine administration.

2. Materials and Methods
2.1. Animals
Male Sprague Dawley rats (250 g, Envigo, Milan, Italy) were housed in a controlled environment (room temperature 22.1 °C, 12 h dark–light cycle) and fed standard rodent food and water. The animals were acclimatized to these conditions for one week. The research was approved by the Animal Welfare Review Board at Messina University (protocol n° 897/2021-PR). All animal studies complied with Italian legislation (D.Lgs 2014/26) as well as EU Directive 2010/63.
2.2. Cashew Nuts’ Nutritional Composition
The cashew kernel samples ( Anacardium occidentale L.) were obtained from the Ivory Coast; per 100 g, they contained 5.40 g moisture, 22.46 g protein, 44.19 g total lipids, 4.48 g total dietary fiber, 30.95 g total sugars, 2.68 g ash, and 80.01 mg total phenols. The nutritional composition was analyzed according to the Association of Official Analytical Chemists (AOAC) Official Method, as previously reported [ 25 ]. The total folate content of cashew nuts is 25 μg/100 g ( https://fdc.nal.usda.gov/fdc-app.html#/food-details/170162/nutrients , accessed on 5 March 2022).
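The macronutrient values reported above also allow a back-of-the-envelope energy estimate. The sketch below is ours, not the authors’ analysis: it applies standard Atwater factors (4/9/4 kcal/g), and using total sugars as the carbohydrate term is an assumption.

```python
# Illustrative only: rough energy estimate per 100 g of cashew kernel from the
# composition reported in Section 2.2, using general Atwater factors.
ATWATER_KCAL_PER_G = {"protein": 4.0, "lipid": 9.0, "carbohydrate": 4.0}

composition_per_100g = {      # grams per 100 g, as reported above
    "protein": 22.46,
    "lipid": 44.19,           # total lipids
    "carbohydrate": 30.95,    # total sugars, used here as the carbohydrate term
}

def atwater_energy(comp):
    """Estimated kcal per 100 g for a {nutrient: grams} mapping."""
    return sum(ATWATER_KCAL_PER_G[k] * grams for k, grams in comp.items())

print(f"≈ {atwater_energy(composition_per_100g):.0f} kcal/100 g")  # ≈ 611 kcal
```

Under these assumptions, the kernels come out at roughly 611 kcal per 100 g, dominated by the lipid fraction.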
2.3. Animal Model Induction
Hyperhomocysteinemia was induced in male rats by oral administration of methionine (meth; 1 g/kg/day for 30 days) dissolved in drinking water [ 28 ].
2.4. Experimental Groups
The animals were randomly divided into groups and treated as follows:
Sham+vehicle: rats received only normal saline (instead of methionine) and were treated orally with saline for 30 days;
Sham+cashew nuts: rats received only normal saline (instead of methionine) and were treated orally with cashew nuts (100 mg/kg, oral) for 30 days;
Meth+vehicle: rats received methionine (1 g/kg, oral) for 30 days and were treated with saline;
Meth+cashew nuts: rats were subjected daily to methionine and received cashew nuts (100 mg/kg, oral) for 30 days.
Doses of cashew nuts were chosen based on previous studies [ 27 ].
Since no significant difference was found between the sham+vehicle and sham+cashew nuts groups, only data for the sham+vehicle group are shown. At the end of the experiment (30 days), the animals were sacrificed, and blood, colon, liver, and kidney tissues were collected from all groups.
2.5. Biochemical Analyses
Serum levels of Hcy were assessed using a commercially available kit for HPLC measurements (Bio-Rad, Milan, Italy), according to the manufacturer’s instructions. The serum concentration of total cholesterol was assessed using a commercially available kit (BioSystems Reagents and Instruments, Barcelona, Spain) by means of an automated UV spectrophotometer (model Slim SEAC, Florence, Italy). All sera were also analyzed for the following parameters: aspartate transaminase (AST), alanine aminotransferase (ALT), lactate dehydrogenase (LDH), and alkaline phosphatase (ALP), using commercial kits (Abcam, Milan, Italy). Plasma creatinine concentrations were assayed as previously described [ 29 , 30 ].
2.6. Antioxidant Levels
The levels of superoxide dismutase (SOD), glutathione (GSH), and catalase (CAT) were assayed in the blood, according to the manufacturer’s instructions (Cusabio Biotech Co., Ltd., Wuhan, China) [ 23 ].
2.7. Malondialdehyde (MDA) Measurement
Plasma malondialdehyde (MDA) levels were determined as an indicator of lipid peroxidation, as indicated [ 31 ]. A total of 100 μL of plasma was added to a mix of 200 μL of 8.1% SDS, 1500 μL of 20% acetic acid (pH 3.5), 1500 μL of 0.8% thiobarbituric acid, and 700 μL distilled water. Samples were then warmed for 1 h at 95 °C and centrifuged at 3000× g for 10 min. The absorbance was detected at 650 nm.
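The absorbance values described above become MDA concentrations only after calibration against standards. The sketch below is a hypothetical illustration (the calibration points, units, and function names are invented, not taken from the paper) of fitting and inverting a linear standard curve:

```python
# Hypothetical example: converting TBARS absorbance readings to MDA
# concentrations via a linear standard curve. All numbers are invented.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Standard curve: known MDA concentrations (µM) vs measured absorbance
std_conc = [0.0, 2.5, 5.0, 10.0, 20.0]
std_abs = [0.01, 0.11, 0.21, 0.41, 0.81]
slope, intercept = linear_fit(std_conc, std_abs)

def mda_concentration(absorbance):
    """Invert the standard curve: concentration = (A - b) / a."""
    return (absorbance - intercept) / slope

print(round(mda_concentration(0.31), 2))  # prints 7.5 for this invented curve
```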
2.8. Cytokine Measurement
Plasma tumor necrosis factor alpha (TNF-α) and interleukin-1β (IL-1β) were assessed using ELISA kits provided by R&D Systems, Minneapolis, MN, USA.
2.9. Histological Examination
For histological analysis, tissues were subjected to hematoxylin and eosin staining and evaluated by experienced pathologists using a Leica DM6 microscope (Leica Microsystems SpA, Milan, Italy) with Leica LAS X Navigator software (Leica Microsystems SpA, Milan, Italy). Histological injuries were scored as previously reported [ 12 , 32 , 33 ]. Paraffin-embedded tissue sections with a thickness of 5 μm were stained with Masson’s trichrome, according to the manufacturer’s protocol (Bio-Optica, Milan, Italy) [ 34 , 35 ].
2.10. Immunohistochemical Localization of Poly (ADP-Ribose) Polymerase (PARP) and Nitrotyrosine
Immunohistochemical analysis was performed as previously described [ 36 , 37 , 38 ]. The sections were incubated overnight with the following primary antibodies: anti-PARP mouse polyclonal antibody (1:100 in PBS, v / v , Santa Cruz Biotechnology (SCB)) and anti-nitrotyrosine rabbit polyclonal antibody (1:200 in PBS, v / v , Millipore). Sections were washed with PBS and then treated as indicated previously [ 36 ]. Five stained sections from each animal were scored in a blinded fashion and observed using a Leica DM6 microscope (Leica Microsystems SpA, Milan, Italy) following a typical procedure [ 39 ]. The histogram profile was related to the positive pixel intensity value obtained [ 40 ].
2.11. Western Blots for Nuclear Factor NF-kB, NRF-2 and HO-1, and Bax and Bcl-2
Cytosolic and nuclear extracts were prepared as previously described [ 41 ]. The following primary antibodies were used: anti-NF-kB (SCB, sc-8008, 1:500), anti-NRF-2 (SCB, sc-365949, 1:1000), anti-HO-1 (SCB, sc-136960, 1:1000), anti-Bcl-2 (SCB, sc-7382), and anti-Bax (SCB, sc-7480), diluted in phosphate-buffered saline with 5% w / v non-fat dried milk and 0.1% Tween-20, at 4 °C overnight. Membranes were then incubated with peroxidase-conjugated bovine anti-mouse IgG or peroxidase-conjugated goat anti-rabbit IgG secondary antibody (Jackson ImmunoResearch, West Grove, PA, USA; 1:2000) for 1 h at room temperature. To verify equal loading, blots were also probed with anti-β-actin or anti-lamin A/C antibodies. Protein bands were detected with a chemiluminescence procedure, following the manufacturer’s instructions (SuperSignal West Pico Chemiluminescent Substrate; Pierce), quantified by densitometry with Bio-Rad ChemiDoc XRS+ software, and normalized to β-actin or lamin A/C levels. Images of blot signals were imported into analysis software (Image Quant TL, v2003).
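The densitometric quantification described above boils down to simple ratios. The following sketch is purely illustrative (band intensities and group names are invented placeholders, not the authors’ data): each target band is normalized to its lane’s loading control and then expressed as fold change versus the sham group.

```python
# Illustration only: normalizing band densities to a loading control (β-actin)
# and expressing them as fold change vs sham, as in Section 2.11.
# All intensity values and group names below are invented placeholders.

target_density = {"sham": 1200.0, "meth": 3400.0, "meth+cashew": 1900.0}  # e.g., NF-kB
actin_density = {"sham": 2000.0, "meth": 2100.0, "meth+cashew": 1950.0}   # β-actin per lane

# Normalize each lane to its own loading control
normalized = {g: target_density[g] / actin_density[g] for g in target_density}

# Fold change relative to the sham group
fold_change = {g: normalized[g] / normalized["sham"] for g in normalized}

for group, fc in fold_change.items():
    print(f"{group}: {fc:.2f}-fold vs sham")
```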
2.12. Terminal Deoxynucleotidyl Transferase dUTP Nick-End Labeling (TUNEL) Assay
Apoptosis was analyzed by a TUNEL assay using a cell death detection kit. TUNEL staining for apoptotic cell nuclei was performed as previously described [ 42 ].
2.13. Materials
All chemicals were analytical grade or higher. Methionine was purchased from Sigma Chemical (St. Louis, MO, USA).
2.14. Statistical Evaluation
All results are given as the mean ± standard error of the mean (SEM) of N observations, where N is the number of animals. For histology and immunohistochemistry, the images shown are representative of at least three separate experiments. For multiple comparisons, a one-way ANOVA was employed, followed by a Bonferroni post-hoc test; a p value of <0.05 was considered significant.

3. Results
3.1. Effect of Cashew Nuts on Serum Hcy Levels after Methionine Administration
In the present study, to verify that the oral administration of methionine effectively induced HHcy, we measured the levels of Hcy in the serum. Methionine administration (1 g/kg, oral) for 30 days increased Hcy levels compared to the sham group ( Figure 1 A). However, cashew nut treatment was not able to reduce the elevated serum levels of Hcy ( Figure 1 A).
Thus, treatment with cashew nuts did not act directly on serum Hcy levels, but may instead have reduced the inflammation and oxidative stress resulting from the HHcy condition.
3.2. Effect of Cashew Nut Oral Administration on Biochemical Changes Induced by HHcy in Rats
To analyze the clinical effects of HHcy, we measured biomarkers of the lipid profile (total cholesterol), liver function (ALT, AST, ALP, LDH), and kidney function (creatinine). An increase in serum total cholesterol, ALT, AST, ALP, and LDH concentrations was observed in L-methionine-induced HHcy rats compared to the sham group ( Figure 1 B–F). Cashew nut treatment decreased these elevated serum concentrations compared with the L-methionine-administered group ( Figure 1 B–F). In addition, an increase in plasma creatinine was also observed in meth-subjected animals compared to the sham group ( Figure 1 G). Cashew nut treatment was able to reduce creatinine levels ( Figure 1 G).
3.3. Effect of Cashew Nut Oral Administration on Oxidative Stress Induced by HHcy in Rats
A decrease in serum levels of SOD, CAT, and GSH was found in HHcy-subjected rats compared to the sham groups ( Figure 2 A–C). Oral administration of cashew nuts significantly increased all of these parameters ( Figure 2 A–C). In addition, an increase in MDA levels, an indicator of lipid peroxidation, was observed ( Figure 2 D). Cashew nuts significantly reduced plasma MDA levels ( Figure 2 D).
3.4. Effect of Cashew Nut Oral Administration on Cytokine Release Induced by HHcy in Rats
HHcy induced by oral methionine administration increased the plasma release of the cytokines TNF-α and IL-1β in HHcy vehicle rats compared to the controls ( Figure 2 E,F). Cashew nut treatment significantly decreased pro-inflammatory cytokine release ( Figure 2 E,F).
3.5. Effect of Cashew Nut Oral Administration on Histological Damage and Fibrosis Induced by HHcy in Rats
Oral methionine administration caused marked histological alterations in the kidney, colon, and liver tissues, with necrosis, inflammation, and cellular infiltrate observed ( Figure 3 D–F, and see scores in Figure 3 L–N) compared to the sham group ( Figure 3 A–C, and see scores in Figure 3 L–N). Cashew nut treatment significantly reduced the histological injury in all tissues ( Figure 3 G–I, and see scores in Figure 3 L–N). Additionally, Masson’s trichrome staining was performed to evaluate the fibrotic process through collagen deposition in the liver, colon, and kidney tissues. This staining showed increased collagen formation in HHcy rats treated with the vehicle ( Figure 4 D–F) compared to the sham group ( Figure 4 A–C). Cashew nuts reduced collagen formation in all tissues ( Figure 4 G–I).
3.6. Effect of Cashew Nut Oral Administration on Nitrotyrosine and PARP in HHcy Rats
The expression of nitrotyrosine, a specific indicator of nitrosative stress, and PARP, an indicator of DNA breakdown, was analyzed by immunohistochemical staining. Sections of colon, liver, and kidney tissues from the sham rats showed no positive staining for nitrotyrosine ( Figure 5 A–C, and see Figure 5 L–N), whereas sections from the HHcy rats demonstrated robust positive staining for nitrotyrosine ( Figure 5 D–F, and see Figure 5 L–N). In addition, increased PARP-positive staining was also observed in tissues from the HHcy rats ( Figure 6 D–F, and see Figure 6 L–N) compared to the sham group ( Figure 6 A–C, and see Figure 6 L–N). Oral treatment with cashew nuts at 100 mg/kg significantly reduced positive staining for nitrotyrosine and PARP in all tissues ( Figure 5 G–I and Figure 6 G–I, and see Figure 5 L–N and Figure 6 L–N).
3.7. Effect of Cashew Nut Oral Administration on NF-kB, NRF-2, and HO-1 Expression in HHcy Rats
To better investigate whether, in HHcy, cashew nuts may act through signaling pathways such as nuclear NF-kB or Nrf-2/HO-1, Western blots for the NF-kB and NRF-2/HO-1 pathways were also performed on liver, kidney, and colon tissues. Increased nuclear NF-kB and reduced Nrf-2 expression were observed in response to HHcy induction with respect to the sham animals ( Figure 7 A,A1,A2,B,B1,B2,C,C1,C2). Cashew nuts significantly reduced the level of nuclear NF-kB and upregulated Nrf-2 compared with the HHcy vehicle group, suggesting that cashew nuts diminish the nuclear translocation of NF-kB and increase Nrf-2 ( Figure 7 A,A1,A2,B,B1,B2,C,C1,C2). At the same time, Western blot analysis showed that cashew nut treatment significantly counteracted the HHcy-induced decrease in HO-1 protein expression ( Figure 7 A,A3,B,B3,C,C3).
3.8. Effect of Cashew Nut Oral Administration on Apoptosis in HHcy Rats
In order to assess whether the damage induced by methionine in HHcy was also associated with apoptosis, we performed a TUNEL assay and Western blot analyses. In the liver, kidney, and colon sections, the TUNEL assay was used to determine how many cells were undergoing apoptosis. A low level of TUNEL-positive staining was detected in the sham group ( Figure 8 A–C, and see Figure 8 L–N), whereas a significantly increased number of TUNEL-positive cells was observed in the HHcy rats ( Figure 8 D–F, and see Figure 8 L–N). Administration of cashew nuts reduced the number of TUNEL-positive cells ( Figure 8 G–I, and see Figure 8 L–N). Methionine administration also significantly increased the expression of Bax (pro-apoptotic) and decreased that of Bcl-2 (anti-apoptotic) ( Figure 9 A,A1,A2,B,B1,B2,C,C1,C2). Cashew nut treatment significantly downregulated HHcy-induced Bax expression and significantly increased the levels of Bcl-2, exerting a significant anti-apoptotic effect ( Figure 9 A,A1,A2,B,B1,B2,C,C1,C2).

4. Discussion
HHcy is a methionine metabolism abnormality that can cause a variety of disorders in humans, including cardiovascular and neurological conditions like atherosclerosis and stroke; inflammatory syndromes, including osteoporosis and rheumatism; and Alzheimer’s and Parkinson’s diseases. Methionine metabolism is tightly controlled by a number of enzymes that regulate Hcy levels. Certainly, the cell’s well-being depends on balanced enzyme activity, and its failure might result in an increase in Hcy concentration, which could contribute to the onset of a variety of pathological disorders. HHcy may be a disorder involving the dysfunction of multiple organs, such as the kidney, liver, or gut, which are currently poorly understood, putting a greater emphasis on the need to invest in research. HHcy caused by dietary folate restriction promotes oxidative stress in the kidneys or generates ROS release, inflammatory infiltration, and fibrosis, and lowers the glycogen/glycoprotein concentration in the liver of rats [ 1 , 43 ]. Several reports have also revealed that high blood Hcy levels are a major risk factor for chronic kidney disease [ 44 ]. A calibrated intake of correct vitamin doses, such as folate, vitamin B6, vitamin B12, and betaine, may be effective in controlling HHcy-related diseases [ 45 , 46 ]; as a result, daily consumption of these micronutrients must be examined. Because nuts include proteins, healthy fatty acids, and critical elements, a daily balanced consumption of nuts is important for good health. The oxidative effect of ROS could be neutralized by eating polyphenol-rich foods, fresh fruits, and vegetables. The cashew nut is one of the four most well-known nuts in the world, thanks to its high nutritional value and unique flavor. Several of our studies have demonstrated the beneficial effects of cashew nuts in different inflammatory experimental models, in particular in colitis, pancreatitis, ischemia, and osteoarthritis [ 23 , 25 , 26 , 27 , 47 ].
Studying the status of HHcy in animal models could help researchers to better understand the mechanisms underlying the disease, and create effective preventative and intervention strategies for HHcy-induced tissue alterations [ 48 ]. L-methionine-treated rats are typically used to examine HHcy and its downstream consequences. Because high concentrations of Hcy can harm cells and play a role in the production and progression of tissue damage, substances that can reduce oxidative stress may be useful in this process; cashew nuts may be one food with this ability.
Based on this, and considering the content of folate and flavonoids present, the aim of this work was to evaluate the anti-inflammatory and antioxidant effects of cashew nuts in HHcy and examine the possible pathways involved. In HHcy rats, oral administration of cashew nuts (100 mg/kg) was able to counteract clinical biochemical changes, oxidative and nitrosative stress, reduced antioxidant enzyme levels, lipid peroxidation, proinflammatory cytokine release, and histological tissue injuries, fibrosis, and apoptosis in the kidney, colon, and liver. Our findings are consistent with prior research that found methionine supplementation increased plasma Hcy levels, induced oxidative stress, and decreased antioxidative enzyme activity [ 49 ]. HHcy has been shown to cause mitochondrial dysfunction through regulating oxidative stress [ 50 , 51 ], and may promote the generation of H 2 O 2 and hydroxyl radicals via the autoxidation of sulfhydryl (-SH) groups or by decreasing the intracellular levels of GSH, which is implicated in the abolition of free radicals [ 52 ]. Hcy-induced reductions in Nrf2 expression, as well as reduced antioxidant enzyme expression/activity and increased ROS generation, have been widely reported [ 51 ]. In response to various types of stimulation, such as oxidative stress, Nrf2 is the major transcriptional activator of the HO-1 gene [ 53 ]. The Nrf2/HO-1 pathway can protect cells from oxidative stress-induced damage when it is activated [ 54 ]. Antioxidants are strong activators of Nrf2 because, after metabolism, they generate a minor quantity of oxidative stress that supports Nrf2 activation [ 55 , 56 ]. High levels of Hcy significantly repress HO-1 mRNA and protein expression in HepG2 cells [ 49 ]. Inhibition of SOD activity is also one of the mechanisms of Hcy-induced oxidative stress [ 13 ]. Derouiche et al. discovered that Hcy inhibits SOD and CAT activities in rats [ 57 ].
In the present study, we found that cashew nut treatment was able to promote Nrf2 nuclear translocation and to induce the expression of Nrf2, and regulated factors such as HO-1 in all tissues, along with increasing the serum levels of SOD, GSH, and CAT. This is in agreement with previous studies in which cashew nuts were able to activate the NRF2/HO-1 pathway [ 26 ].
It has also been widely reported that HHcy induces acute and chronic inflammatory events via NF-κB regulation [ 12 ]. In a neuroblastoma cell line, an induced high level of Hcy was demonstrated to boost NF-kB levels, and this was inhibited by the introduction of antioxidants [ 58 ]. HHcy was also reported to promote the production of IL-1β and TNF-α by human peripheral blood monocytes [ 59 , 60 ]. A previous study reported that, in inflamed lungs, animals given anacardic acids from cashew nuts had lower levels of neutrophils and TNF [ 61 ]. In addition, cashew nut administration reduced the levels of cytokines in other animal models [ 23 , 25 , 26 , 27 , 47 ]. The release of proinflammatory cytokines is regulated by intracellular signal transduction, such as the NF-κB pathway. In this study, we demonstrated that cashew nuts cause a reduction of NF-κB expression, as well as of pro-inflammatory TNF-α and IL-1β levels. Peroxynitrite is a highly reactive oxidant that damages cells by altering lipids, proteins, and DNA. A considerable increase in the amounts of lipid peroxides and nitrotyrosine protein adducts in hyperhomocysteinemic rats was previously observed [ 62 ]. Consistent with these findings, in this study we demonstrated that oral treatment with cashew nuts significantly reduced lipid peroxidation (as shown by decreased plasma MDA levels), nitrotyrosine production, and PARP activation in all tissues.
The link between ROS and apoptosis is well known. Previous research has found that excessive Hcy levels cause cardiomyocyte apoptosis or necrosis by increasing oxidant stress [ 63 ]. Furthermore, Hcy administration increases the levels of many pro-apoptotic markers, such as Bax, p53, and caspase-3, implying an association between HHcy-induced cell damage and NF-kB activation [ 58 ]. Here, we also demonstrated an increased expression of the proapoptotic protein Bax and decreased expression of antiapoptotic Bcl-2 by Western blot analysis, as well as an augmented presence of apoptotic fragments by TUNEL assay in all tissues of HHcy rats. Cashew nut treatment was able to reduce this apoptotic process.
In conclusion, cashew nuts were able to ameliorate tissue inflammation and oxidative stress, possibly through the regulation of ROS-induced signaling, such as nuclear NRF-2 or NF-κB, and increased antioxidant capacity. Thus, the balanced consumption of cashew nuts could be beneficial for inflammatory events associated with HHcy.
These authors contributed equally to this work.
Hyperhomocysteinemia (HHcy) is a methionine metabolism problem that causes a variety of inflammatory illnesses. Oxidative stress is among the processes thought to be involved in the pathophysiology of the damage produced by HHcy. HHcy is likely to involve the dysfunction of several organs, such as the kidney, liver, or gut, which are currently poorly understood. Nuts are regarded as an important part of a balanced diet since they include protein, good fatty acids, and critical nutrients. The aim of this work was to evaluate the anti-inflammatory and antioxidant effects of cashew nuts in HHcy induced by oral methionine administration for 30 days, and to examine the possible pathways involved. In HHcy rats, cashew nuts (100 mg/kg orally, daily) were able to counteract clinical biochemical changes, oxidative and nitrosative stress, reduced antioxidant enzyme levels, lipid peroxidation, proinflammatory cytokine release, histological tissue injuries, and apoptosis in the kidney, colon, and liver, possibly by the modulation of the antioxidant nuclear factor erythroid 2-related factor 2 (NRF-2) and inflammatory nuclear factor NF-kB pathways. Thus, the results suggest that the consumption of cashew nuts may be beneficial for the treatment of inflammatory conditions associated with HHcy.

Acknowledgments
We would like to acknowledge Salma Seetaroo from Ivorienne de Noix de Cajou S.A. of Cote d’Ivoire for providing the cashew kernel samples from the Ivory Coast. In addition, we would like to thank Valentina Malvagni for editorial support with the manuscript.
Author Contributions
Conceptualization, R.D.P., R.S. and D.I.; data curation, E.G. and R.C.; formal analysis, E.G., R.C., G.M., D.C.; methodology, R.F., A.F.P., M.C., R.D. and T.G.; project administration, S.C.; writing—original draft, R.D., M.C.; writing—review and editing, S.C., R.D.P., R.S. and D.I. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
The animal study protocol was approved by the Institutional Review Board of the University of Messina (approval n° 897/2021-PR of 11/27/2021). All animal studies complied with Italian legislation (D.Lgs 2014/26) as well as EU Directive 2010/63.
Informed Consent Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Citation: Nutrients. 2022 Apr 1; 14(7):1474. License: CC BY.
PMC9094045 (PMID: 35573302)
Introduction
Acetylcholine is an important neurotransmitter for both the maintenance of internal homeostasis and the interaction of individuals with the external environment ( Picciotto et al., 2012 ). Several physiological functions depend on cholinergic transmission, including immunological, endocrine, and neural responses ( Picciotto et al., 2012 ; Cox et al., 2020 ). In the nervous system, cholinergic transmission is ubiquitous, including, for example, peripheral synapses that regulate autonomic and motor responses, and central connections that modulate sensory and cognitive functions ( Huang et al., 2000 ; Miles et al., 2007 ; Zagoraiou et al., 2009 ; Jordan et al., 2014 ; Sourioux et al., 2018 ; Parikh and Bangasser, 2020 ). Due to the immense diversity of neural circuits that depend on cholinergic transmission, the specificity of cholinergic receptors at the synaptic level is essential for the selectivity of their functions.
Cholinergic transmission is mediated via muscarinic and nicotinic receptors, which involve metabotropic and ionotropic signaling, respectively ( Ishii and Kurachi, 2006 ; Hurst et al., 2013 ). In the auditory system, efferent pathways connect the brain with the cochlear receptors, and at the final synapses of these descending circuits the auditory efferent system (AES) possesses a unique type of cholinergic transmission that evolved in vertebrates. These connections are mediated by α9/α10 nicotinic acetylcholine receptors (nAChRs), located at the synapses between medial olivocochlear (MOC) neurons and outer hair cells (OHC) of the cochlea ( Elgoyhen et al., 1994 , 2009 ; Delano and Elgoyhen, 2016 ).
In 1999, Vetter et al. generated a strain of mice carrying a null mutation of the Chrna9 gene ( Vetter et al., 1999 ), giving rise to α9-KO mice, which lack cholinergic transmission between MOC neurons and OHCs. These genetically modified mice provided a unique opportunity to study the role of MOC cholinergic transmission in auditory and cognitive functions.
This article reviews behavioral and physiological studies examining the role of cholinergic MOC synapses in auditory and cognitive functions, emphasizing those performed in α9-KO mice. We also discuss the possible evolutionary role of the auditory efferent system in mammals, probably as a feedback loop to enhance the detection of acoustic signals in noise. Finally, we present evidence implicating MOC cholinergic transmission in auditory disorders, such as age-related hearing loss.

Conclusion
In conclusion, experimental models such as the Chrna9 KO mouse have allowed the development of multiple lines of auditory research, facilitating substantial advances in knowledge about AES functioning and providing therapeutic possibilities for the treatment of auditory pathologies. Notwithstanding all the advances that the Chrna9 KO mouse has permitted in the study of auditory physiology, we believe that the development of a time-dependent conditional knock-out is key to the future understanding of the AES role in audition and cognition. This type of tool would allow better control of the possible compensatory effects on embryonic development or of neurotransmitter plasticity due to the lack of cholinergic transmission (e.g., GABA), and would make it possible to rule out the impact of non-neural tissues that are also affected in α9-KO mice.

Edited by: Victoria M. Bajo Lorenzana, University of Oxford, United Kingdom
Reviewed by: Adrian Rodriguez-Contreras, City College of New York (CUNY), United States
This article was submitted to Auditory Cognitive Neuroscience, a section of the journal Frontiers in Neuroscience
Cholinergic transmission is essential for survival and reproduction, as it is involved in several physiological responses. In the auditory system, both ascending and descending auditory pathways are modulated by cholinergic transmission, affecting the perception of sounds. The auditory efferent system is a neuronal network comprised of several feedback loops, including corticofugal and brainstem pathways to the cochlear receptor. The final and mandatory synapses of the auditory efferent system, which connect the brain with the cochlear receptor, involve medial olivocochlear neurons and outer hair cells. A unique cholinergic transmission mediates these synapses through α9/α10 nicotinic receptors. To study this receptor, a strain of mice carrying a null mutation of the Chrna9 gene was generated (α9-KO mice), lacking cholinergic transmission between medial olivocochlear neurons and outer hair cells and providing a unique opportunity to study the role of medial olivocochlear cholinergic transmission in auditory and cognitive functions. In this article, we review behavioral and physiological studies carried out to research auditory efferent function in the context of audition, cognition, and hearing impairments. Auditory studies have shown that hearing thresholds in α9-KO mice are normal, while more complex auditory functions, such as frequency selectivity and sound localization, are altered. The corticofugal pathways have been studied in α9-KO mice using behavioral tasks, evidencing a reduced capacity to suppress auditory distractors during visual selective attention. Finally, we discuss the evolutionary role of the auditory efferent system in detecting vocalizations in noise and its role in auditory disorders, such as the prevention of age-related hearing loss.

Auditory Efferent System
The auditory efferent system is a neural network that originates in the auditory cortex and projects to multiple subcortical nuclei of the central auditory pathways. These corticofugal pathways generate several feedback loops, including (i) the collicular-thalamic-cortico-collicular loop; (ii) the cortico-(collicular)-MOC circuit; and (iii) the cortico-(collicular)-cochlear nucleus loop ( Terreros and Delano, 2015 ). The most peripheral section of the AES projects from the superior olivary complex in the brainstem to the inner ear and auditory nerve, via MOC and lateral olivocochlear neurons, respectively (AES pathways are summarized in Figure 1A ).
The MOC system appears to be present in all mammals ( Smith et al., 2005 ). Comparative studies suggest that inner ear efferents emerged during evolution from facial branchial motor neurons, which project to the inner ear instead of facial muscles ( Fritzsch and Elliott, 2017 ). Like motor neurons, MOC neurons release acetylcholine as their main neurotransmitter activating nicotinic receptors in the OHCs. Pharmacological studies on MOC-OHC synapses have shown that auditory efferent effects at the cochlear receptor are mainly mediated by the α9/α10 nicotinic cholinergic receptors (nAChRs) located in the basolateral domain of OHCs ( Kujawa et al., 1992 , 1994 ; Rothlin et al., 1999 ; Vetter et al., 1999 ; Elgoyhen et al., 2001 ). The activation of α9/α10 nAChRs by acetylcholine produces an increase of intracellular Ca 2+ concentration, permitting the opening of K + channels (SK2) at the basolateral domain, followed by an outward current that hyperpolarizes the OHCs ( Figure 1B ). The physiological effect of this OHC hyperpolarization is the reduction of basilar membrane motion and an overall cochlear sensitivity decrease ( Elgoyhen and Katz, 2012 ). It is important to emphasize that, given its position at the final synapses of the auditory efferent network, studying the role of the α9/α10 nAChRs is paramount to understanding the AES function.
The α9/α10 Nicotinic Acetylcholine Receptors
The evolutionary history of the nAChRs can be traced back as far as a billion years ( Fritzsch and Elliott, 2017 ), being ancestral even to multicellular animals. During the early evolution of animals, these receptors underwent rapid diversification into several subunits ( Li et al., 2016 ). Specifically, Chrna9 subunits appear to be exclusively associated with vertebrates, and their research history formally begins in 1994 ( Elgoyhen et al., 1994 ). This receptor was identified as showing a preferential localization in the cochlear hair cells of the vertebrate inner ear ( Elgoyhen et al., 1994 ). In addition, it has also been found in dorsal root ganglia and in other non-neural tissues, e.g., mouse lymphocytes and keratinocytes, rat alveolar macrophages, and murine bone marrow cells ( Lips et al., 2002 ; Peng et al., 2004 ; Chernyavsky et al., 2007 ; Colomer et al., 2010 ; Lee et al., 2010 ; Mikulski et al., 2010 ; Chikova and Grando, 2011 ; Koval et al., 2011 ; Hollenhorst et al., 2012 ; Zablotni et al., 2015 ; Jiang et al., 2016 ; St-Pierre et al., 2016 ), illustrating possible physiological functions in nociception and beyond the nervous system.
Although the cholinergic nature of the MOC system has been known since the late 1950s ( Churchill and Schuknecht, 1959 ), the structure of its cholinergic receptor remained unknown for almost four decades. The receptor is a pentameric cation channel composed of two α9 and three α10 subunits, with a mixed nicotinic-muscarinic pharmacological profile ( Elgoyhen et al., 1994 , 2001 ; Plazas et al., 2005 ). The α10 subunit of the OHC nicotinic receptor was cloned by Elgoyhen et al. (2001) , while Vetter et al. (2007) demonstrated that both subunits (α9 and α10) are required for a functional channel. These authors concluded that the α10 nAChR subunit is essential for MOC functioning ( Elgoyhen et al., 1994 , 2001 ; Vetter et al., 1999 ; Weisstaub et al., 2002 ).
α9-KO Mice
Vetter et al. (1999) generated a strain of mice carrying a null mutation of the Chrna9 gene, giving rise to α9-KO mice. The mouse was developed by replacing exon 4 of the Chrna9 gene, which contains the coding sequence of the ligand-binding site, together with its surrounding intronic sequences, with a neomycin resistance gene. This yields a nonfunctional α9 subunit, allowing investigation of the α9-nAChR in vivo .
Although α9-KO mice show no evident abnormalities in gross cochlear morphology compared to wild type (WT), including the cochlear duct, hair cells, supporting cells, and spiral ganglion neurons ( Vetter et al., 1999 ), several abnormalities have been described in the morphology and number of synaptic terminals between MOC neurons and OHCs. Specifically, larger and fewer MOC synaptic terminals have been described in α9-KO mice ( Vetter et al., 1999 ). For instance, in the middle cochlear turn of WT mice, most OHCs are contacted by two efferent terminals, while in the α9-KO mice, most OHCs are contacted by a single efferent terminal. This evidence indicates that synaptic development of MOC neurons is altered in the α9-KO mice, raising a caveat for the interpretation of these results.
Auditory Function in the α9-KO Mice
As evaluated by behavioral detection of tones in quiet and background noise conditions, hearing thresholds are normal in the α9-KO mice ( Prosen et al., 2000 ; May et al., 2002 ). Similarly, electrophysiological assessments using wave V thresholds of auditory brainstem responses (ABR) have confirmed the presence of normal hearing thresholds in the α9-KO mice compared to WT mice ( Terreros et al., 2016 ). As expected, MOC function is abolished in the α9-KO mice when evaluated by electrical stimulation of MOC fibers at the floor of the fourth ventricle ( Vetter et al., 1999 ), and diminished when assessed with contralateral sound stimulation and measuring auditory-nerve responses through wave I from ABR ( Terreros et al., 2016 ).
Other auditory alterations have been found using prepulse inhibition of the acoustic startle response, which is decreased in the α9-KO mice and increased in mutant mice with enhanced MOC function (L9’T-KI) ( Taranda et al., 2009 ; Allen and Luebke, 2017 ; Lauer et al., 2021 ). Furthermore, the α9-KO mice exhibit deficits in sound localization, as evaluated with conditioned lick suppression tasks assessing the minimum audible angle ( Clause et al., 2017 ). Evidence shows that frequency selectivity is also impaired in mouse models lacking MOC transmission, as suggested by electrophysiological and behavioral studies ( Clause et al., 2014 , 2017 ). In sum, the lack of MOC cholinergic transmission does not alter hearing thresholds but does affect more complex auditory functions, such as prepulse inhibition, frequency selectivity, and sound localization. Changes in auditory function in α9-KO mice are summarized in Table 1 .
Auditory Efferent Corticofugal Pathways
One of the proposed functions of the AES is the suppression of irrelevant auditory distractors during visual attention. This hypothesis emerges from studies performed in behaving cats and chinchillas during visual selective attention tasks, in which the animals showed a reduction of auditory nerve responses to distracting sounds ( Oatman et al., 1971 ; Delano et al., 2007 ). This idea was tested in α9-KO mice trained in a two-choice visual discrimination task with auditory distractors ( Terreros et al., 2016 ). In this task, similar to that used previously in chinchillas, α9-KO mice made fewer correct responses and more omissions than WT mice when 65 dB clicks and tones were used as distractors. On the other hand, when broad-band noise at 90 dB was presented as a distractor, there were no differences between α9-KO and WT mice. Furthermore, the strength of the MOC reflex was positively correlated with the number of correct responses and negatively correlated with omitted trials in mice and chinchillas ( Terreros et al., 2016 ; Bowen et al., 2020 ). We therefore propose that MOC activation aids in ignoring distracting sounds at moderate sound pressure levels, while middle ear muscle activation might help suppress auditory distractors at high sound pressure levels.
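The correlations reported above between MOC reflex strength and behavioural performance are plain correlation coefficients. As a from-scratch sketch, with invented values chosen only to illustrate the direction of the reported relationships (positive with correct responses, negative with omissions):

```python
# Pearson correlation computed from scratch. Values below are synthetic
# illustrations, not data from the cited studies.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

moc_strength = [0.2, 0.5, 0.6, 0.9, 1.1]   # hypothetical reflex strength (a.u.)
correct = [55, 63, 66, 78, 80]             # hypothetical % correct responses
omissions = [30, 24, 22, 12, 9]            # hypothetical % omitted trials
print(round(pearson_r(moc_strength, correct), 2),
      round(pearson_r(moc_strength, omissions), 2))
```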
Recent work in humans and chinchillas has raised the hypothesis that visual working memory could also recruit MOC neurons. In this line, Marcenaro et al. (2021) showed that the strength of MOC activation by contralateral sounds is enhanced during a visual working memory task in humans. More recently, Vicencio-Jimenez et al. (2021) studied late responses, executed 2.5 seconds after stimulus offset, in a visual discrimination task in chinchillas in which the animals had to hold the visual stimulus in the working memory buffer to respond correctly. Late responses were correlated with the strength of the MOC reflex (contralateral sound) only when tested with auditory distractors, but not when visual discrimination was performed in silence ( Vicencio-Jimenez et al., 2021 ). Together, these studies suggest that the activation of MOC neurons to suppress irrelevant sound is a common feature of visual attention and visual working memory.
Brainstem Olivocochlear Function and Auditory Pathologies
The MOC reflex involves brainstem circuits, and its activation reduces the cochlear gain, a physiological effect that can protect against acoustic trauma and aging. In this line, the strength of the MOC reflex has been correlated with susceptibility to noise-induced hearing loss (NIHL) ( Maison and Liberman, 2000 ). This finding suggested that strengthening MOC feedback could prevent hearing loss after noise exposure. Taranda et al. (2009) used L9’T-KI mice with enhanced MOC function to confirm that brainstem MOC feedback can reduce the damage induced by acoustic trauma.
Age-related hearing loss or presbycusis is a highly prevalent condition in elderly people, especially in individuals chronically exposed to acoustic noise. The disorder is characterized by reduced hearing sensitivity and speech understanding in noisy environments, altered central auditory processing, and a higher risk of developing cognitive impairment and dementia ( Panza et al., 2015 ). On this basis, the strength of the efferent reflex has been linked to protection against the development of hearing loss, cochlear synaptopathy and age-related hair cell loss ( Boero et al., 2018 , 2020 ).
Therefore, enhancing MOC feedback arises as a promising approach to prevent age-related hearing loss. In this context, the α9/α10 nAChR is a candidate therapeutic target. Two molecules known to enhance the activity of this receptor are ascorbic acid and ryanodine ( Zorrilla De San Martín et al., 2007 ; Boffi et al., 2013 ), opening the possibility of investigating their effects to prevent presbycusis. Although clinical evidence is limited, there is at least one report in humans showing a correlation between ascorbic acid intake and improved hearing in the older population ( Kang et al., 2014 ). High-quality clinical trials are necessary to further investigate these molecules as treatments for age-related hearing loss.
NIHL could also be prevented or treated through pharmacological modulation of the α9/α10 nAChR. As with presbycusis, drugs that augment the effect of the MOC system on the OHCs could be used to prevent NIHL in workers in noisy environments. Exposure to loud noise has short- and long-term consequences, ranging from a transient attenuation of hearing acuity to a permanent threshold shift ( Le et al., 2017 ). However, on some occasions exposure to loud noise raises hearing thresholds at frequencies that are not measured by conventional audiometry, which covers only up to 8 kHz; frequencies between 8 and 20 kHz are not routinely studied. It has been proposed that increased hearing thresholds at frequencies above 8 kHz could reflect hidden hearing loss (HHL) in humans, known as cochlear synaptopathy in animal models ( Kujawa and Liberman, 2009 ).
Evolutionary Role of Auditory Efferents
Despite the evidence supporting an important role for MOC cholinergic transmission in protecting against acoustic trauma and cochlear synaptopathy, it is unlikely that this was a critical factor in the evolutionary history of the AES. It is far more probable that its evolution is linked to its role in hearing in noise, because the high-intensity noise that induces acoustic trauma is uncommon in natural conditions, making it likely that the protective function arose as an exaptation or evolutionary spandrel ( Gould, 1997 ; Smith and Keil, 2015 ). If we consider this evolutionary context, some interesting questions about the receptor arise. How has it changed in different mammals? What impact has the evolutionary history of different mammalian families had on the OHC nAChRs? For example, given the role of the MOC system in the regulation of cochlear gain, it likely contributes to suppressing the individual’s own vocalizations, protecting the cochlea from overstimulation ( Lauer et al., 2021 ). It would therefore be plausible to observe adaptations in the α9/α10 nAChR in animals with high-intensity vocalizations, such as bats, cetaceans, and some primates.
In this context, future research in the α9-KO mice could evaluate the differences in vocalization patterns between them and WT mice. Furthermore, in the case of animals with high-intensity vocalizations, such as bats that vocalize above 100 dB ( Moss and Schnitzler, 1995 ), protection against acoustic trauma might be a function directly selected in the MOC system. Therefore, it also seems feasible to find adaptations in the receptor associated with a high sound intensity environment.
Author Contributions
FM and GT: original idea. FM, GT, SV-J, PJ, and PHD: manuscript writing. PJ: figure editing. FM, GT, and PHD: manuscript editing. All authors contributed to the article and approved the submitted version.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Front Neurosci. 2022 Apr 27; 16:866161
PMC9117295 (PMID: 35115661)

Introduction
Genetic variations at the DLG2 gene locus are linked to multiple psychiatric and neurological disorders, including schizophrenia [ 1 , 2 ], bipolar disorder [ 3 , 4 ], autism spectrum disorder [ 5 – 7 ], attention deficit hyperactivity disorder [ 8 ], intellectual disability [ 9 , 10 ], and Parkinson’s disease [ 11 , 12 ]. This clinical evidence indicates the significance of DLG2 in the aetiology of psychopathologies common to a broad range of disorders and suggests core underlying mechanisms and biological phenotypes. Many of the genetic variations are predicted to produce a loss of function for DLG2 in one copy of the gene, but the resulting changes in neuronal function are poorly understood [ 13 – 16 ].
DLG2 is a member of a family of membrane-associated guanylate kinase (MAGUK) proteins enriched at synaptic locations and encodes the scaffolding protein PSD93 (also referred to as DLG2 or Chapsyn-110). DLG2 interacts directly with a number of other proteins in the postsynaptic density of excitatory synapses, such as the NMDA receptor (NMDAR) subunit GluN2B [ 17 – 20 ], the AMPA receptor auxiliary subunit stargazin [ 21 ], and the potassium channels Kir2.3 [ 22 ], Kir2.2 [ 23 ] and Kv1.4 [ 24 ], as well as proteins involved in potassium channel palmitoylation, cell adhesion, microtubule assembly, and cell signalling: palmitoyltransferase ZDHHC14 [ 25 ], neuroligin 1-3 [ 17 ], Fyn [ 26 , 27 ], ERK2 [ 28 ], GKAP [ 29 ], and MAP1A [ 30 ]. Uniquely among the MAGUK family, DLG2 is targeted to the axon initial segment, where it regulates neuronal excitability via its interactions with potassium channels [ 25 , 31 ]. At a functional level, homozygous Dlg2−/− knockout mice have altered glutamatergic synapse function [ 32 – 35 ] and impaired long-term potentiation (LTP) in the hippocampus [ 33 ]. These synaptic perturbations could underlie the common cognitive psychopathologies of the psychiatric disorders associated with DLG2 . Indeed, Dlg2−/− mice have been shown to have impaired performance in the object-location paired associates learning task [ 36 ]. Homozygous Dlg2−/− mice also exhibit increased grooming behaviour [ 35 ] and altered social interaction, but without consistent effects in negative valence tasks such as the open field test [ 35 , 37 ].
Impaired synaptic plasticity resulting from the loss of DLG2 is a potential biological phenotype underpinning trans-diagnostic cognitive psychopathologies but the mechanism by which reduced DLG2 expression leads to impaired synaptic plasticity is unclear. Furthermore, mechanistic understanding for the impact of DLG2 loss may reveal new targets for therapeutic intervention. Most animal models for DLG2 loss have employed full knockouts of the gene but these do not accurately represent the heterozygous nature of DLG2 genetic variants in patient populations and potentially engage compensatory expression by other MAGUK proteins [ 32 , 38 ] that is not present in heterozygous reduced gene dosage models (Supplementary Fig. S3 ). Therefore, here we investigate the combined impact of low gene dosage DLG2 on synaptic function, neuronal excitability and morphology using a novel CRISPR-Cas9 engineered heterozygous Dlg2 + /− (het) rat model to understand the interactions that lead to impaired synaptic plasticity and cognitive function. | Materials and methods
Animals and husbandry
All procedures were carried out under local institutional guidelines, approved by the University of Bristol Animal Welfare and Ethical Review Board, and in accordance with the UK Animals (Scientific procedures) Act 1986. The experiments employed a novel Dlg2 + /− heterozygous rat model generated on a Hooded Long Evans background using CRISPR-Cas9 genomic engineering that targeted a 7 bp deletion to exon 5 of the rat Dlg2 gene, resulting in a downstream frame shift in exon 6 and the production of a premature stop codon that led to a reduction in Dlg2 protein levels in the hippocampus (Supplementary Figs. S1 – 3 ). Full details of the creation, quality control and off-target assessment of the Dlg2 + /− model can be found in Supplementary information. Male Dlg2 + /− rats were bred with wild type (wt) female rats, generating mixed litters of Dlg2 + /− and wt littermate offspring. The Dlg2 + /− animals were viable and showed no signs of ill health, with normal litter sizes containing the expected Mendelian ratio of positive to wt genotypes and normal sex ratios. There were no effects on survival of the Dlg2 + /− rats to adulthood and no effects on general morbidity, including fertility, or on mortality throughout the lifespan. Further details of animal husbandry, breeding strategy and viability are described in Supplementary information. Approximately equal numbers of rats of each sex, aged P50-75, were used, with the experimenter blind to genotype during experiments and data analysis.
Methods on brain slice preparation, electrophysiology, protein quantification and computational modelling are in the supplementary information.
Statistical analysis
3-way and 2-way ANOVA, 3-way repeated measures ANOVA, the Kolmogorov–Smirnov test, and paired and unpaired t-tests were used as appropriate, with full statistical results available in Supplement 2 . Genotype, sex, and dorsal-ventral aspect of the hippocampus, as well as repeated measurements, were factored into all analyses, as appropriate. Genotype was viewed as the primary output factor shown in the figures. No genotype-sex and only one genotype-aspect interaction were found, indicating limited impact of sex and aspect on the primary genotype factor results, but where effects of other factors were found, the data are presented in Supplement 1 (Supplementary Figs. S8 – 16 , S18 and 19 ). Inclusion of animals in the analysis of a subset of experiments using multi-level general linear mixed modelling did not affect the statistical results, indicating that the major source of variability arose between cells rather than animals. Therefore, cell was defined as the experimental unit and we report numbers of cells and animals in figure legends. α = 0.05 was applied for all tests, except the Kolmogorov–Smirnov test where α = 0.01 was applied. The degrees of freedom, F , and P values are presented in the text, figures, and Supplement 2 .
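The analyses above were multi-factor ANOVAs (full results in Supplement 2). As a from-scratch illustration of the F statistic underlying them, the sketch below computes a one-way F for a single genotype factor on synthetic values; it is the single-factor building block only, not the study's multi-factor models.

```python
# One-way ANOVA F statistic computed from scratch on synthetic values
# (e.g. a genotype comparison); the paper's analyses were 2- and 3-way.

def one_way_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

wt = [152, 148, 160, 155, 149, 158]    # synthetic, e.g. input resistance (MOhm)
het = [128, 133, 125, 131, 127, 130]
print(round(one_way_f([wt, het]), 1))
```

A large F (here, with clearly separated synthetic group means) indicates that between-group variance dominates within-group variance; identical groups give F = 0.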
Dlg2 + /− heterozygous knockout rats were generated by CRISPR-Cas9 targeting of the Dlg2 gene (Supplementary information). In this model, DLG2 protein expression levels were reduced by ~50% in hippocampus without effects on expression of other components of the postsynaptic density, including the closely related MAGUK DLG4 (PSD95) and the GluN1 NMDAR subunit (Supplementary Fig. S3 ). The specific reduction in DLG2 protein expression was replicated in tissue from prefrontal cortex, posterior cortex and cerebellum and mirrored by a ~50% reduction of dlg2 mRNA expression, without any change in dlg1 , dlg3 or dlg4 mRNA [ 39 ].
Associative LTP
The learning of novel representations in CA1 is thought to arise from the dendritic integration of spatiotemporally coherent inputs from the entorhinal cortex (via the temporoammonic (TA) pathway) and CA3 (via the Schaffer collateral (SC) pathway) that can summate supra-linearly to drive associative LTP (aLTP) [ 40 – 47 ]. LTP in CA1 of hippocampus is impaired in homozygous Dlg2 −/− mice [ 33 ] but the interpretation of these results is complicated by the potential for compensation by other MAGUK proteins [ 32 , 38 ]. The heterozygous Dlg2 +/− rat offers the opportunity to test whether LTP is impaired in the absence of any MAGUK compensation.
aLTP was assessed in the CA1 region of hippocampal slices by stimulating the SC and TA pathways simultaneously with a theta burst stimulation pattern whilst recording from CA1 pyramidal neurons (Fig. 1A, B ). An additional independent SC pathway was also stimulated as a negative control and a pathway check was done to confirm pathway independence (Supplementary Fig. S4 ). The induction protocol resulted in robust aLTP in the wts but reduced aLTP in the Dlg2 + /− hets in both SC and TA test pathways (Fig. 1C–E ). During induction, the number of elicited action potential bursts and single spikes was reduced in the Dlg2 + /− hets, despite baseline EPSC amplitudes being unchanged, indicating that all neurons received similar inputs; there was also a trend suggesting reduced overall depolarisation in response to synaptic stimulation (Fig. 1F–J ). Both spike number and depolarisation during induction correlated with LTP in the SC pathway but not the TA pathway (Supplementary Fig. S5 ). There was no effect of genotype on the after-hyperpolarisation (Fig. 1K ). This suggests that in the Dlg2 + /− hets, the integration of synaptic inputs from the SC and TA pathways is impaired, reducing the dendritic depolarisation and action potential spiking that drive aLTP.
To test the necessity of action potentials for aLTP, a paired theta burst LTP induction protocol was used, where action potentials were driven by somatic current injection to bypass dendritic integration, and spikes were paired with simultaneous SC pathway stimulation (Fig. 1L ). Under these conditions, robust LTP was induced in the SC pathway in both genotypes, with the TA pathway acting as a negative control (Fig. 1M–O ). Similarly, when aLTP was tested using baseline EPSCs doubled in amplitude, maximal LTP was induced and there was no effect of genotype (Supplementary Figs. S6 and 7 ). As expected, aLTP was greater in ventral slices whereas theta burst LTP was greater in dorsal slices (Supplementary Figs. S8 and 9 ). This indicates that the hets are fundamentally able to undergo LTP but their ability to integrate inputs is impaired.
Synaptic integration
To directly test synaptic integration, the number of activated synapses required to generate supra-linear summation of EPSPs across multiple dendrites was assessed, which is a measure of the ability for synapses to integrate across the dendritic arbor [ 40 , 42 , 47 ] and allowed comparison of synaptic integration between genotypes. To activate increasing numbers of synapses, the SC pathway was stimulated with increasing intensity to activate synapses with a random spatial distribution across the proximal and basal dendritic arbor. The number of activated synapses was measured by the slope of a single EPSP, and the integration of synapses assessed by the amplitude and duration of a compound summated EPSP (area under the curve – AUC) elicited by repetitive high frequency synaptic stimulation (Fig. 2A ). As stimulation intensity was increased the number of active synapses increased in a linear relationship with the amplitude and durations of the summated compound EPSP until a “change point” was reached (see methods) after which the relationship became supra-linear because the duration of the compound EPSP increased (Fig. 2B ), indicative of activation of regenerative or plateau potentials within the dendrites [ 40 , 42 , 47 ]. The inhibition of these regenerative potentials by D-APV demonstrated their dependence on NMDAR activation (Fig. 2A, B ). The change point was increased in the Dlg2 + /− hets (Fig. 2B, C ), indicating that het neurons required more synaptic inputs to undergo the transition to supra-linear integration. Additionally, the maximum duration of the compound EPSP as a ratio to the corresponding slope of the single EPSP was reduced in the Dlg2 + /− hets (Fig. 2D ). This again indicates that Dlg2 + /− hets require more synaptic input to integrate dendritic inputs and produce the supra-linear regenerative potentials important for aLTP.
NMDAR currents
Synaptic integration is driven by NMDARs and protein-protein interaction studies have reported DLG2 to interact directly with NMDAR subunits [ 17 – 19 ] and with AMPAR indirectly [ 21 ]. Further, DLG2 has also been shown to affect glutamatergic function in homozygous Dlg2−/− models, albeit with variable results in AMPA/NMDA ratio [ 32 – 34 , 48 , 49 ]. To investigate whether glutamatergic function was affected in Dlg2 + /− rats and whether this might explain the impairment in synaptic integration and aLTP, the AMPA/NMDA ratio was measured in CA1 pyramidal neurons (Fig. 3A ). Dlg2 + /− hets had a reduced AMPA/NMDA ratio in the SC pathway, with no effect in the TA pathway (Fig. 3B–D ). AMPAR-mediated miniature excitatory postsynaptic currents (mEPSCs, Supplementary Fig. S10 ) resulting from the activity of single synapses were recorded to probe whether the reduction in AMPA/NMDA ratio resulted from a reduction in AMPA, an increase in NMDA, or a combination of the two. The slow kinetics and small amplitude of NMDAR-mediated mEPSCs make them difficult to detect accurately. AMPAR-mediated mEPSCs from synapses on proximal dendrites are more detectable than those from more distal synapses due to signal attenuation and therefore recorded mEPSCs will arise from the most proximal synapses [ 50 ]. There was no difference in the distributions of mEPSC amplitude, interevent interval, or decay tau across genotype (Fig. 3E–I ). Paired-pulse facilitation, measured in the AMPA/NMDA ratio experiment, was also not different across genotype in either pathway (Supplementary Fig. S11 ). Together, these results show no change in postsynaptic AMPAR function and presynaptic glutamate release probability in the SC pathway. It follows that the AMPA/NMDA ratio effect in the SC pathway was due to an increase in NMDAR function. This could result from either an increase in NMDAR number or a change in subunit composition between GluN2A and GluN2B. 
To test subunit composition, NMDAR currents were isolated (Fig. 3J ) and the selective GluN2B negative allosteric modulator RO256981 was applied. RO256981 decreased EPSC amplitude (Fig. 3J–L ) and increased the EPSC decay time in both the SC and the TA pathways (Fig. 3M, N ). There was a trend toward a genotype x drug interaction in the EPSC amplitude measurement in the SC pathway but no genotype x drug interaction in the EPSC decay kinetics. Together, these results show similar NMDAR subunit composition across genotype and therefore the enhancement in synaptic NMDAR function likely arises from increased receptor numbers at SC synapses, despite overall neuronal receptor expression remaining constant. Enhanced synaptic NMDAR function is expected to increase synaptic integration and therefore cannot explain the observed decrease in integration.
Input resistance
Reduced synaptic integration in dendrites could arise from multiple mechanisms. Based on previous findings in CA1 pyramidal neurons, the three most likely are: i) reduced expression of hyperpolarisation-activated cyclic nucleotide-gated (HCN) channels, which regulate neuronal excitability and contribute to dendritic integration [ 51 – 53 ]; ii) increased expression of small conductance calcium-activated potassium (SK) channels, which inhibit NMDARs at synapses, reducing dendritic integration and LTP [ 54 – 56 ]; iii) reduced input resistance through increased potassium channel expression, particularly at dendritic locations, which would reduce dendritic integration and LTP [ 42 , 47 , 57 – 61 ]. Each of these mechanisms was directly tested.
Pharmacological blockade of HCN channels with ZD7288 produced robust effects on neuronal excitability (including spiking, sag, and input resistance) but there were no differential effects across genotype (Supplementary Fig. S12 ). There were also no genotype-specific effects on cellular resonance or impedance that are directly dependent on HCN channels [ 62 – 66 ] (Supplementary Fig. S13 ) with some effects of sex and aspect (Supplementary Figs. S14 , 15 , 17 , 18 ). As previously described, the SK channel blocker apamin produced an increase in EPSP duration in the SC and TA pathways (Supplementary Fig. S12 ) indicating increased NMDAR activation during synaptic stimulation [ 54 – 56 ]. However, the regulation of synaptic NMDAR function by SK channels was similar between genotypes indicating no change in SK channel expression. Therefore, differential HCN or SK channel function is unlikely to explain the difference in synaptic integration between genotypes.
To assess input resistance, measurements were analysed from voltage clamp experiments (using identical conditions to the LTP experiments in Fig. 1 ) and in current clamp experiments. In both these separate and independent data sets Dlg2 + /− hets had reduced input resistance (Fig. 4A–D ). This increase in electrical leak in the Dlg2 + /− hets is predicted to reduce cross-talk between synapses and their integration leading to a reduced spike output but it is also expected to reduce the spike output in response to somatic current injection. However, despite reduced input resistance in the Dlg2 + /− hets, there was no effect of genotype on spike output to current injection (rheobase) (Fig. 4E, F ). This could be explained by a depolarised resting membrane potential (Fig. 4G ) and a trend towards hyperpolarised action potential spike threshold (Fig. 4H ) in the Dlg2 + /− hets indicating that smaller membrane potential depolarisations were required to initiate spikes. There was no effect of genotype on spike half-width, maximum spike slope, spike amplitude, or capacitance, and a slight decrease in latency to spike in the Dlg2 + /− hets (Supplementary Fig. S16 ).
Reduced input resistance in the Dlg2 + /− hets could be explained via two mechanisms: i) increased membrane area through greater dendritic branching and extent [ 42 , 67 ] or ii) increased membrane conductance, most likely caused by increased potassium channel expression. To test the first mechanism, a subset of neurons from the intrinsic excitability experiments were filled with neurobiotin to allow post hoc morphological analysis. Analysis of these neurons revealed that het neurons were smaller than wt neurons (Fig. 4I ) and had reduced dendritic branch number and total dendritic branch length but had similar mean dendritic branch lengths (Fig. 4J–L ). Sholl analysis demonstrated that Dlg2 + /− het neurons had reduced dendritic arborisation overall, with the most striking differences in the basal and proximal apical regions (Fig. 4M ). Contrary to the predicted neuronal size–input resistance relationship, there was no correlation between total dendritic branch length and input resistance (Fig. 4N ). Therefore, reduced neuronal arborisation in the Dlg2 + /− hets cannot explain the observed reduced input resistance and instead increased potassium channel expression is the most likely explanation.
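The Sholl-type analysis above counts dendritic crossings of concentric circles centred on the soma, with the count at each radius summarizing local arborisation. A minimal sketch on a toy neuron, where each branch piece is reduced to the pair of radial distances of its two ends (all values invented):

```python
# Minimal Sholl-analysis sketch: count segments straddling each radius.
# Segments are toy (start_dist, end_dist) radial distances from the soma.

def sholl(segments, radii):
    """Number of dendritic crossings at each concentric radius."""
    counts = []
    for r in radii:
        n = sum(1 for a, b in segments if min(a, b) < r <= max(a, b))
        counts.append(n)
    return counts

# toy neuron: three primary branches, one bifurcating at 40 um
segments = [(0, 40), (0, 40), (0, 60), (40, 90), (40, 70)]
print(sholl(segments, radii=[20, 50, 80]))
```

A genotype difference like the one reported would appear as systematically lower counts for the het profile, most prominently at proximal radii.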
Computational modelling of synaptic integration in representative reconstructed pyramidal neurons also predicted increased potassium channel expression as the mechanism underlying reduced input resistance (Supplementary Fig. S19 ) and enabled exploration of the likely potassium channel subtypes mediating reduced synaptic integration. DLG2 interacts with the potassium inward rectifiers Kir2.3 [ 22 ] and Kir2.2 [ 23 ] as well as A-type Kv1.4 [ 24 ] channels, which therefore represent potential candidates to underpin decreased input resistance and synaptic integration. The model suggested that A-type potassium channels are the most likely candidates upregulated in the Dlg2 + /− hets to underlie the dendritic integration deficits (Supplementary Fig. S19 ). However, any mechanism that restores input resistance is predicted to facilitate dendritic integration in the Dlg2 +/− hets.
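The qualitative prediction of the modelling above can be reproduced in a passive single-compartment toy model: adding K + leak conductance lowers input resistance and, with it, the temporal summation of an EPSC train. This is only a caricature of the paper's reconstructed-morphology simulations, with all parameter values assumed.

```python
# Passive single-compartment sketch: extra K+ leak conductance lowers
# input resistance (R = dV/dI for a current step) and reduces temporal
# summation of a 100 Hz EPSC train. Toy stand-in; parameters invented.
import math

def simulate(g_extra_k, i_of_t, t_total=0.4, dt=5e-5):
    """Euler integration of a passive compartment; returns (final V, peak V)."""
    C, g_leak, E_leak, E_K = 100e-12, 5e-9, -0.070, -0.090
    v = (g_leak * E_leak + g_extra_k * E_K) / (g_leak + g_extra_k)  # resting V
    peak = v
    for step in range(int(t_total / dt)):
        i_ion = g_leak * (v - E_leak) + g_extra_k * (v - E_K)
        v += dt * (i_of_t(step * dt) - i_ion) / C
        peak = max(peak, v)
    return v, peak

def epsc_train(t, onset=0.1, n=5, isi=0.01, amp=50e-12, tau=0.004):
    """Five decaying-exponential EPSCs at 100 Hz (current-based)."""
    return sum(amp * math.exp(-(t - t0) / tau)
               for t0 in (onset + k * isi for k in range(n)) if t >= t0)

for g_k in (0.0, 5e-9):                    # wt-like vs added K+ leak ("het-like")
    v_rest = simulate(g_k, lambda t: 0.0)[0]
    v_step = simulate(g_k, lambda t: -10e-12)[0]
    r_in = (v_rest - v_step) / 10e-12                  # R = dV / dI
    depol = simulate(g_k, epsc_train)[1] - v_rest      # peak train summation
    print(f"g_k={g_k * 1e9:.0f} nS: R_in={r_in / 1e6:.0f} MOhm, "
          f"summation={depol * 1e3:.2f} mV")
```

Doubling the resting conductance halves the input resistance and shortens the membrane time constant, so successive EPSPs in the train decay faster and summate less, matching the direction of the het phenotype.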
Rescue of synaptic integration and plasticity
The aLTP, theta burst LTP, and dendritic integration results from Figs. 1 and 2 suggest that, given enough synaptic input, Dlg2 + /− hets can express LTP despite their reduced input resistance. It follows that by increasing input resistance in the Dlg2 + /− hets, dendritic integration and aLTP could be effectively rescued. Three separate methods to increase input resistance were tested for their effectiveness in rescuing dendritic integration. The first was the relatively broad-spectrum voltage-sensitive potassium channel blocker, 4-aminopyridine (4-AP) [ 68 ], the second was the selective Kv1.3, Kv1.4 blocker CP339818 [ 69 ] and the third was activation of muscarinic M1 receptors [ 54 , 70 ]. 4-AP caused an increase in input resistance, a reduction in the supra-linearity change point, a trend toward increased maximum duration of the compound EPSP as a ratio to the corresponding slope of the single EPSP, and a repolarisation in resting membrane potential (Fig. 5A–E ). The effects of 4-AP were not genotype-specific, as there were no drug x genotype interactions. These results support the computational modelling predictions that voltage-sensitive potassium channels attenuate dendritic integration and blocking them facilitates it. Since DLG2 interacts with Kv1.4, the selective blocker CP339818 was used to test whether the upregulation of these channels was responsible for reduced dendritic integration. However, CP339818 had no effect on input resistance or dendritic integration (Supplementary Fig. S20 ) indicating that upregulation of these specific A-type potassium channels does not underpin the reduction in dendritic integration in the Dlg2 + /− hets but does not rule out a role for other A-type channels.
These results demonstrate, as predicted, that blocking a subset of potassium channels active around the resting membrane potential facilitates dendritic integration. However, the considerable heterogeneity of potassium channels and their ability to compensate for one another, coupled with the limited availability of selective pharmacological tools, make it challenging to identify and target the precise channels that cause reduced input resistance in the Dlg2 +/− hets. An alternative approach, and one with greater therapeutic potential, is to rescue the input resistance reduction indirectly, for example by activation of cholinergic muscarinic M1 receptors, which inhibit potassium channel function and increase dendritic excitability [ 42 , 47 , 57 – 61 ]. Support for this approach was found using the highly selective muscarinic M1 receptor allosteric partial agonist 77-LH-28-1 [ 71 ], which increased input resistance, reduced the change point, increased the maximum duration of the compound EPSP as a ratio to the corresponding slope of the single EPSP, and depolarised the resting membrane potential (Fig. 5F–J ). However, there were no significant drug x genotype interactions. Similar results were found for the broad-spectrum non-hydrolysable acetylcholine analogue carbachol (Supplementary Fig. S21 ).
These results suggest that pharmacological enhancement of dendritic excitability and integration may be sufficient to rescue aLTP in the Dlg2 +/− hets. Therefore, the aLTP experiment was repeated in the presence of 77-LH-28-1. This rescued aLTP in the Dlg2 +/− hets, producing robust aLTP in both SC and TA pathways (Fig. 5K–M ). In addition, unlike in the absence of 77-LH-28-1, there was no effect of genotype and no pathway x genotype interaction (Fig. 5M ), indicating that 77-LH-28-1 selectively rescues aLTP in the Dlg2 +/− hets. Importantly, baseline EPSC amplitude did not differ across pathways or genotypes (Fig. 5N ), indicating that the amount of synaptic input received was similar in all conditions. Analysis of the aLTP induction phase revealed that 77-LH-28-1 rescued synaptic summation and the resulting action potential spiking (Fig. 5O–R ) as well as plateau potential generation (Supplementary Fig. S22 ), with the genotypic differences in number of bursts, EPSP summation, and spike number disappearing. Taken together, Fig. 5 shows that 77-LH-28-1 reduced the threshold for dendritic integration in both wts and Dlg2 +/− hets but selectively facilitated aLTP in the Dlg2 +/− hets, indicating that induction of synaptic plasticity in the Dlg2 +/− hets is more sensitive to increased dendritic excitability and synaptic integration. | Discussion
In the Dlg2 +/− heterozygous rat model, NMDAR currents are increased and dendritic arborisation is reduced, observations that would be expected to combine to enhance neuronal excitability, dendritic integration and synaptic plasticity. Instead, these effects are entirely offset, and indeed reversed, by a concomitant reduction in input resistance caused by an increase in potassium channel expression, potentially of A-type potassium channels. This increase in electrical leak is the dominant effect, resulting in a final phenotype in which dendritic integration and aLTP are impaired. Crucially, dendritic integration can be rescued by potassium channel block or activation of muscarinic M1 receptors, the latter of which can also rescue synaptic plasticity. These phenotypes are particularly relevant because the Dlg2 +/− rat models human single-copy genetic variants.
The direct interaction between DLG2 and GluN2B NMDAR subunits suggests that the most important effects of DLG2 perturbations are on NMDAR function and, consequently, on synaptic integration and plasticity. However, previous studies of Dlg2−/− full knockout models have reported either no change in the AMPA/NMDA ratio or a reduction in the AMPA/NMDA ratio due to reduced AMPAR function [ 32 – 34 , 48 , 49 ]. Here, AMPAR function was unchanged and instead we found an unexpected increase in NMDAR currents, likely caused by increased synaptic expression selectively at Schaffer collateral synapses. There is no evidence that DLG2 is differentially expressed at Schaffer collateral versus temporoammonic synapses in CA1, so the mechanism for this selective enhancement of NMDAR expression is unknown. On its own, enhanced NMDAR current predicts enhanced aLTP, but we found the converse: aLTP impairment. This is similar to previous reports in Dlg2−/− mice, in which CA1 LTP was normal in response to a strong 100 Hz induction protocol but reduced in response to TBS given to just the SC pathway [ 33 ]. In our study using heterozygous Dlg2 +/− rats, TBS-induced LTP pairing postsynaptic stimulation with SC input was normal, and an LTP deficit became apparent only when neurons were required to integrate converging inputs, suggesting a nuanced and potentially behaviourally relevant phenotype in the clinically relevant Dlg2 +/− model. Furthermore, synaptic integration and the initiation of non-linear dendritic events are key determinants of feature detection and selectivity in neuronal networks [ 41 , 72 , 73 ], and deficits in detecting events and assigning appropriate salience are important features of many psychiatric disorders [ 74 ].
The dichotomy between enhanced NMDAR currents and reduced NMDAR-dependent plasticity during aLTP in Dlg2 +/− rats highlights the dominant role played by changes in intrinsic neuronal excitability, in this instance reduced input resistance caused by increased potassium channel function. Interestingly, no changes in input resistance were reported in a Dlg2−/− full knockout model [ 75 ], again highlighting the importance of using clinically relevant models. In our Dlg2 +/− model, this increase in potassium channel function does not appear to be caused by a direct interaction with DLG2 but instead to arise as a homoeostatic regulatory mechanism, perhaps compensating for increased synaptic currents. A similar compensatory mechanism is found in other models of psychiatric disorders, such as Fmr1-/y mice, where changes in intrinsic neuronal excitability dominate the resulting perturbations in network processing, including dendritic integration and synaptic plasticity [ 63 , 76 – 78 ]. This raises the intriguing possibility that genetic disruptions to synaptic function may generally cause homoeostatic compensations in intrinsic neuronal excitability that dominate neuronal function and present a common biological phenotype across multiple psychiatric disorders [ 79 ].
We have demonstrated in this study that the compensatory mechanisms affecting neuronal excitability can be ameliorated pharmacologically with selective agonists such as 77-LH-28-1, rescuing impairments in synaptic integration and plasticity: a proof of principle that may be applicable to other psychiatric disorder risk variants. For example, an increase in input resistance following administration of 77-LH-28-1 could facilitate spike backpropagation, potentially rescuing the plasticity impairments and network dysfunctions reported in the Cacna1c +/− and 22q11 deletion syndrome models of genetic vulnerability to schizophrenia [ 78 , 80 ]. Highly selective muscarinic M1 receptor agonists have shown clinical efficacy with negligible side effects [ 81 – 84 ], making them attractive pharmaceutical tools. It remains to be seen whether behavioural impairments in DLG2 models can be rescued using similar pharmacological strategies. | Copy number variants indicating loss of function in the DLG2 gene have been associated with markedly increased risk for schizophrenia, autism spectrum disorder, and intellectual disability. DLG2 encodes the postsynaptic scaffolding protein DLG2 (PSD93), which interacts with NMDA receptors, potassium channels, and cytoskeletal regulators, but the net impact of these interactions on synaptic plasticity, likely underpinning the cognitive impairments associated with these conditions, remains unclear. Here, hippocampal CA1 neuronal excitability and synaptic function were investigated in a novel clinically relevant heterozygous Dlg2 +/− rat model using ex vivo patch-clamp electrophysiology, pharmacology, and computational modelling. Dlg2 +/− rats had reduced supra-linear dendritic integration of synaptic inputs resulting in impaired associative long-term potentiation.
This impairment was not caused by a change in synaptic input since NMDA receptor-mediated synaptic currents were, conversely, increased and AMPA receptor-mediated currents were unaffected. Instead, the impairment in associative long-term potentiation resulted from an increase in potassium channel function leading to a decrease in input resistance, which reduced supra-linear dendritic integration. Enhancement of dendritic excitability by blockade of potassium channels or activation of muscarinic M1 receptors with the selective allosteric agonist 77-LH-28-1 reduced the threshold for dendritic integration, and 77-LH-28-1 rescued the associative long-term potentiation impairment in the Dlg2 +/− rats. These findings demonstrate a biological phenotype that can be reversed by compound classes used clinically, such as muscarinic M1 receptor agonists, and is therefore a potential target for therapeutic intervention.
| Supplementary information
The online version contains supplementary material available at 10.1038/s41386-022-01277-6.
Acknowledgements
We thank Jenny Carter for coordinating the initial generation and breeding of the Dlg2+/− rat line, Rachel Humphries for computational modelling discussions and Aleks Domanski and all members of the Robinson and Mellor groups for general discussions. We also thank Hannah Jones and Estela Michail for their input in the study of morphology.
Author contributions
Conceptualisation: SG, JH, LSW, ESJR, JRM; Methodology: SG, CO’D, JRM; Investigation and Analysis: SG, SW; Writing: SG, SW, CO’D, KLT, DMD, LSW, JH, ESJR, JRM; Supervision: KLT, DWD, LSW, JH, ESJR, JRM.
Funding
The authors gratefully acknowledge funding from the Medical Research Council (UK) (CO’D; GW4 BIOMED PhD studentship to SG), the Biotechnology and Biological Sciences Research Council (UK) (JRM), and the Wellcome Trust (UK) 101029/Z/13/Z (JRM; PhD studentship to SW). The Dlg2+/− rats were generated as part of a Wellcome Trust Strategic Award ‘DEFINE’ (JH and LSW), and the Wellcome Trust Strategic Award and the Neuroscience and Mental Health Research Institute, Cardiff University, UK provided core support. ER has received research funding from Boehringer Ingelheim, Eli Lilly, Pfizer, Small Pharma Ltd. and MSD, and DD has received research funding from Eli Lilly, but these companies were not associated with the data presented in this manuscript.
Competing interests
The authors declare no competing interests. | Neuropsychopharmacology. 2022 Jun 3; 47(7):1367-1378 (PMC9117295). Licence: CC BY.
PMC9363242 | PMID: 35944931 | Background
Description of the condition
High blood pressure is the leading cause of preventable deaths worldwide, contributing to more than 10 million deaths and 211 million disability-adjusted life years annually, mainly due to ischaemic heart disease and stroke ( Forouzanfar 2017 ).
Hypertension is typically defined by a systolic blood pressure (SBP) ≥ 140 mmHg or a diastolic blood pressure (DBP) ≥ 90 mmHg, although recent guidelines define stage 1 hypertension as a SBP of 130 to 139 mmHg or a DBP of 80 to 89 mmHg to reflect current blood-pressure lowering targets ( Arnett 2019 ). After 50 years of age, SBP increases disproportionately to DBP in many individuals due to factors such as increased arterial stiffness ( Franklin 2011 ; Lee 2010 ), with an elevated SBP being a prominent risk factor for cardiovascular events in older people ( Staessen 2000 ). In 2015, an estimated 874 million adults had a SBP of 140 mmHg or higher ( Forouzanfar 2017 ).
High sodium together with insufficient potassium intake contributes to hypertension, thereby increasing the risk of cardiovascular disease and stroke. Current global estimates of sodium intake are 3950 mg (172 mmol) per person per day ( Powles 2013 ), which equates to nearly ten grams of salt (sodium chloride) per person per day. For sodium, current World Health Organization (WHO) guidelines strongly recommend reducing intake in adults to < 2 g/day sodium (equating to about 5 g salt per day) and a downward adjusted intake in children ( WHO 2012a ). Global estimates of potassium intake for all ages, education levels, residences and sexes in 2018 are 2.3 grams per person per day ( Global Dietary Database 2022 ), which equates to an intake of 59 mmol per person per day. For potassium, the WHO conditionally recommends an intake in adults of at least 90 mmol/day (3510 mg/day) and a downward adjusted intake in children ( WHO 2012b ). Although antihypertensive drug therapy is an effective method for controlling blood pressure, poor adherence to antihypertensive therapy substantially increases the near- and long-term risk of stroke among patients with hypertension ( Herttua 2013 ), and access to health care such as blood-pressure lowering medication is not universally available.
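The unit conversions used above (milligrams of sodium or potassium to millimoles, and sodium to its salt equivalent) follow directly from the molar masses of the elements and of sodium chloride. A brief sketch of the arithmetic:

```python
# Convert dietary sodium/potassium intakes between mass (mg) and amount (mmol),
# and express sodium as its sodium chloride (salt) equivalent.
# Molar masses (g/mol): Na 22.99, K 39.10, NaCl 58.44.
NA_MOLAR_MASS = 22.99
K_MOLAR_MASS = 39.10
NACL_MOLAR_MASS = 58.44

def mg_to_mmol(mg, molar_mass):
    """Milligrams of an element -> millimoles (mg / (g/mol) = mmol)."""
    return mg / molar_mass

def sodium_mg_to_salt_g(sodium_mg):
    """Sodium (mg) -> equivalent salt (g): each mmol Na corresponds to 1 mmol NaCl."""
    return mg_to_mmol(sodium_mg, NA_MOLAR_MASS) * NACL_MOLAR_MASS / 1000

print(round(mg_to_mmol(3950, NA_MOLAR_MASS)))  # 172 mmol sodium/day (global estimate)
print(round(sodium_mg_to_salt_g(3950), 1))     # 10.0 g salt/day ("nearly ten grams")
print(round(mg_to_mmol(2300, K_MOLAR_MASS)))   # 59 mmol potassium/day (global estimate)
print(round(mg_to_mmol(3510, K_MOLAR_MASS)))   # 90 mmol/day (WHO recommended minimum)
```

The same arithmetic reproduces the sodium-to-salt factor of roughly 2.5 (58.44 / 22.99), which is why a 2 g/day sodium limit equates to about 5 g of salt.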
Hypertension is also a major contributor to the development and progression of chronic kidney disease (CKD). Adequate blood pressure control has been shown to be effective in slowing the progression of CKD to end-stage renal disease (ESRD). In addition, adequate treatment of diabetes and of cardiovascular risk factors such as dyslipidaemia is also linked to lower rates of progression to ESRD, and is associated with significant reductions in cardiovascular morbidity and mortality ( Couser 2011 ).
Description of the intervention
The WHO target of a 30% relative reduction in mean population salt/sodium intake by 2025 requires effective and safe strategies to reduce population intake. One of several existing salt reduction strategies is using salt products with lower concentrations of sodium ‐ usually replaced by potassium or other minerals, or both. These low‐sodium salt substitutes (LSSS) vary widely in their formulations and are available in high‐income as well as low‐ and middle‐income countries. In many LSSS, a proportion of sodium chloride (NaCl) is replaced with potassium chloride (KCl), which shares many properties with NaCl but also has unwanted relatively offensive side tastes (bitter, acrid, and metallic). A recent narrative review ( Cepanec 2017 ) described the many various formulations of KCl‐based LSSS, which include the use of numerous taste‐improving agents (TIAs) and formulation concepts. Authors concluded that “within the great number of various compositions of KCl‐based salt substitutes, presumably the most effective ones are based on well‐balanced mixtures of KCl and NaCl, maintaining a sodium reduction range from −25% to −50% (relative to NaCl), which always include certain percentages of one or more TIAs. A typical formulation of a KCl‐based salt substitute with 50% in sodium reduction is 50% NaCl + 30‐45% KCl + 5‐20% taste‐improving agents.” Incorporating salt substitutes into population strategies to reduce sodium intake has increasingly been recognised by health authorities and public health organisations ( Greer 2020 ), especially in countries where the majority of sodium intake comes from the discretionary use of salt by households.
How the intervention might work
The dose-response relationship between reduced dietary sodium and blood pressure change was examined in a recent systematic review (133 studies with 12,197 participants). The authors showed that in diverse populations, lower sodium intakes resulted in blood pressure reductions, with greater reductions in sodium intake producing greater reductions in blood pressure ( Huang 2020 ). Additionally, older and non-white populations (for SBP), as well as those with higher baseline blood pressure (for SBP and DBP), achieved greater blood pressure reductions from the same amount of sodium reduction ( Huang 2020 ). Reductions in blood pressure, such as a reduction of 5 mmHg in SBP, translate to important reductions (10%) in the risk of major cardiovascular events (e.g. fatal or non-fatal stroke or myocardial infarction), as demonstrated by a recent meta-analysis of individual participant-level data from 48 trials of blood-pressure lowering medication ( Rahimi 2021 ). Observational studies have demonstrated that stroke risk is inversely associated with dietary potassium intake ( Vinceti 2016 ). In addition, data from randomised clinical trials have shown that potassium supplements have a blood-pressure lowering effect in people with hypertension, particularly those with a high sodium intake ( Filippini 2017 ). As described in the Description of the condition section, global estimates of potassium intake are lower than what is currently recommended by the WHO. The low dietary intake of potassium, in addition to high dietary sodium intake, contributes to hypertension. Therefore, interventions or strategies promoting the use of a potassium-enriched LSSS could aid in reducing sodium intake, while concurrently increasing potassium intake, at the population level.
Reduction in sodium intakes ‐ either through reduction of dietary salt intake, salt substitution, or a combination of these ‐ may also be a practical choice for patients with hypertension who are resistant to antihypertensive medications or who experience side effects from medications. It may also play an important role as an adjunctive therapy in the management of hypertensive individuals by potentially lowering the doses of antihypertensive medication required. In cases where the behavioural changes required to reduce dietary salt intake are very difficult or unfeasible, salt substitutes may offer convenience and practicality. Therefore, salt substitution as a cost‐effective strategy could result in reductions in health‐care costs associated with non‐communicable diseases at a population level. However, it should be noted that if foods with high levels of non‐discretionary sodium chloride are regularly consumed, the discretionary use of LSSS may not result in a sufficient reduction in sodium intake to be beneficial.
LSSS may offer a potential solution for the food industry to develop lower sodium food products without compromising on taste or safety, particularly in countries where non‐discretionary sodium intake contributes significantly to the overall population intake of sodium. However, because KCl costs more than NaCl, significant consumer demand, industry‐targeted subsidies or taxes on high sodium content foods will likely be required before the food industry will absorb the costs of product reformulation. Therefore, the application of LSSS strategies at population level to reduce sodium consumption is dependent on several factors, including its main uses within a population, as well as its effects on food taste and cost ( Greer 2020 ).
The greatest risk with potassium‐based LSSS is the potential for adverse effects resulting from hyperkalaemia, particularly the increased risk of arrhythmias and sudden cardiac death. The risk of adverse events is greater at higher levels of serum potassium. There is no absolute threshold at which these adverse events occur, however a serum potassium level of ≥ 6.0 mmol/L is commonly considered to be a clinically significant threshold above which the most serious manifestations of hyperkalaemia occur ( Ahee 2000 ; Hollander‐Rodriguez 2006 ). Multiple factors influence the occurrence of these adverse events, such as the underlying cause of hyperkalaemia and the rate at which serum potassium increases. High intakes of dietary potassium have not been linked to adverse effects in healthy adults and children with normal kidney function. However, the effects of high dietary potassium intakes on the risk of adverse effects are a key concern among people with impaired potassium excretion, such as those with chronic kidney disease or taking medications that impair potassium excretion ( Greer 2020 ; Kovesdy 2018 ).
A reduction of dietary sodium intake through the population‐level implementation of LSSS may also result in hyponatraemia in people with impaired renal function ( Sahay 2014 ), including older people and people treated with thiazide diuretics ( Upadhyay 2009 ).
Why it is important to do this review
If the best available evidence on replacing salt with LSSS shows adequate effectiveness and safety for important outcomes, it could be recommended as a population‐level intervention for reducing cardiovascular disease risk. However, concerns exist about potential adverse effects of LSSS, such as hyperkalaemia, particularly in those at risk, such as people with chronic kidney disease or on medications that impair potassium excretion.
The WHO is currently developing a guideline on the use of LSSS in adults and children. This review was commissioned by the WHO Nutrition Guidance Expert Advisory Group (NUGAG) Subgroup on Diet and Health in order to inform and contribute to the development of a WHO recommendation on the use of LSSS for this guideline. The results of this review, including Grading of Recommendations, Assessment, Development and Evaluations (GRADE) assessments, were discussed and reviewed by the WHO NUGAG Subgroup on Diet and Health as part of their guideline development process. | Methods
Criteria for considering studies for this review
Types of studies
The Populations, Intervention, Comparison, and Outcomes (PICO) were agreed by the WHO NUGAG Subgroup on Diet and Health, who ranked the outcomes and also agreed on subgroups and study designs to be included. As per these agreements and our prospective registration on the international prospective register of systematic reviews (PROSPERO 2020 CRD42020180162; available at https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=180162 ), we included individually randomised controlled trials (RCTs) and cluster‐randomised controlled trials (cluster‐RCTs) with true randomisation methods, regardless of the unit of allocation. We also planned to include prospective analytical cohort studies, where LSSS intake/exposure was assessed at baseline and related to any of the prespecified outcomes at a later time point using empirical data. We excluded RCTs with a cross‐over trial design if data for the first phase per group were unavailable, due to the possible period and carry‐over effects that would arise with the eligible dietary interventions/exposures and outcomes not being easily reversible, as required for a valid cross‐over design ( Younge 2015 ). We additionally excluded cluster‐RCTs with fewer than two intervention and two control clusters.
Types of participants
We included studies in the general population, from any setting in any country, including participants with the following condition(s) and/or risk factors: hypertension, cardiovascular disease (CVD), diabetes mellitus, renal impairment and those taking medications that impair potassium excretion.
In accordance with the WHO NUGAG PICO conceptualisations and agreements, the following three comparisons were planned if data allowed:
- LSSS versus regular salt or no active intervention in adults (aged 18 years and older)
- LSSS versus regular salt or no active intervention in children aged 2 to < 18 years
- LSSS versus regular salt or no active intervention in pregnant women
Types of interventions
We included studies that assessed the health effects associated with the use of LSSS at an individual, household or community level. LSSS interventions/exposures of any type or duration were included, provided they aimed to replace the dietary intake of any amount of sodium with another mineral or compound. Studies investigating either discretionary (i.e. salt on table or added during cooking) or non‐discretionary use of LSSS (i.e. included during food manufacturing), or both, were included.
Eligible comparators/controls included the use of regular salt (NaCl) or no active intervention to reduce salt intake. Studies where the control group received only basic information on sodium reduction at baseline, were included. Studies with multi‐component interventions were included if effects of LSSS could be isolated from the multifactorial design.
We excluded studies with multi‐component interventions if the additional intervention components were not aimed primarily at promoting LSSS use by participants or communities, but were instead focussed more broadly on reducing sodium intake (e.g. changing lifestyle and dietary behaviour of which LSSS use is only one component) or aimed at improving health in general (e.g. counselling for exercise or smoking cessation), such that LSSS effects could not be isolated.
Types of outcome measures
We did not exclude studies on the basis of outcomes measured. However, we did exclude studies measuring only sensory or organoleptic outcomes (e.g. taste of or preference for LSSS).
Primary outcomes
Table 3 , Table 4 and Table 5 detail the prespecified primary outcomes for each comparison, with outcome ranking by the WHO NUGAG Subgroup on Diet and Health indicated as follows: critical (c), important (i) and not important (ni). The following primary outcomes were regarded as safety outcomes related to the intake of LSSS with potassium: change in blood potassium, hyperkalaemia and hypokalaemia.
Secondary outcomes
Table 3 , Table 4 and Table 5 detail the prespecified secondary outcomes for each comparison, with outcome ranking by the WHO NUGAG Subgroup on Diet and Health indicated as follows: critical (c), important (i) and not important (ni). The following secondary outcomes were regarded as safety outcomes related to the intake of LSSS with potassium: adverse events, renal function and hyponatraemia.
Search methods for identification of studies
The search strategy was developed, peer-reviewed and implemented by Cochrane information specialists in consultation with the review team. We used a comprehensive search strategy aiming to identify all eligible studies regardless of language, publication type or publication status. Publication date restrictions were not imposed, except for conference abstracts identified through Embase, which covered only those published in the past two years. With this, we specifically aimed to find recent proceedings of studies that may not yet have been published as full articles at the time of the search. We used filters for trials ( Lefebvre 2022 ), cohort studies ( Li 2019 ) and adverse effects ( Golder 2006 ; Golder 2012 ) to inform our search strategy.
Electronic searches
We aimed to identify RCTs and prospective analytical cohort studies through systematic searches of the following bibliographic databases:
- MEDLINE (PubMed, from 1946 to 18 August 2021)
- Embase (Ovid, from 1947 to 18 August 2021)
- Cochrane Central Register of Controlled Trials (CENTRAL), in the Cochrane Library (Issue 8 of 12, 2021)
- Web of Science Core Collection with indexes SCI-Expanded, SSCI, CPCI-S (Clarivate Analytics, from 1970 to 18 August 2021)
- Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCOhost, from 1937 to 18 August 2021)
We also searched ClinicalTrials.gov ( www.ClinicalTrials.gov ) and the WHO International Clinical Trials Registry Platform (ICTRP; https://trialsearch.who.int/ ) for ongoing and unpublished trials. The date of the last searches here was also 18 August 2021. Search strategies per database/registry searched are detailed in Appendix 1 .
Searching other resources
To identify any additional eligible records, two reviewers also screened the reference lists of three recent systematic reviews evaluating the effects of LSSS use ( Hernandez 2019 ; Jafarnejad 2020 ; Jin 2020 ), as well as the bibliographies of all studies included in this review.
Data collection and analysis
Selection of studies
After de-duplication of search records, titles and abstracts were screened independently by two reviewers using Covidence ( Covidence ). Full-text articles for all records identified as potentially eligible for inclusion were then screened by two reviewers independently to determine final eligibility. Records where we could not obtain the full text or further details of the study needed to determine eligibility were classified as ‘Studies awaiting classification’. We resolved any disagreements between reviewers at any stage of the eligibility assessment process through discussion and consultation with a third reviewer, where necessary.
Data extraction and management
Two reviewers independently extracted data onto forms designed and piloted for the review, and we resolved any disagreements during the data extraction and management process through discussion and consultation with a third reviewer, where necessary. Translations of non-English records were obtained when needed. We extracted data on the following:
- Study details, including author details, conflict of interest declaration, funding source and setting
- Methods, including design, aim, dates, limitations as reported by authors and sample size calculation
- Participants, including eligibility criteria; method of recruitment; number of clusters per trial arm and how authors accounted for the effect of clustering; participant flow details such as number assessed for eligibility and number randomised; baseline characteristics such as demographic and lifestyle characteristics, health status and intake of sodium and potassium; and any differences in these characteristics by trial arm
- Interventions/exposures, including description, delivery/use, addition of fortificants, duration, co-interventions and integrity of delivery
- Comparators, including description, delivery, duration, co-interventions and integrity of delivery
- Outcomes, including numeric data relevant to all primary and secondary outcomes according to the following time point ranges, when available: baseline to 3 months, > 3 to 12 months and > 12 months, except for cardiovascular events, all-cause mortality, cardiovascular mortality and adverse events, for which data were extracted for the duration of the study

When outcome data were reported at more than one time point, we extracted data from the latest time point available. For studies that did not use the International System of Units (SI) to report outcomes, we converted values to SI units, where possible. For trials, we extracted change data (change in the outcome from baseline to outcome assessment) with relevant data on variance for intervention and control groups (along with numbers of participants at the time point). Where change data were not available, we extracted end values at the time point, along with the variance and numbers of participants for each group, or mean differences (MDs) and measures of variance per group. Where outcome data were only reported per subgroup of the total sample of study participants (e.g. participants with hypertension and participants with normal blood pressure), we extracted these data and calculated the combined mean and standard deviation (SD) for the total sample according to the guidance in Higgins 2020a , where possible. We preferentially extracted and used supine over standing blood pressure measurements, 24-hour measurements over measurements done at a single time point, and ambulatory measurements over those conducted in a clinic setting. For cohort studies, we planned to extract the most adjusted odds ratio, risk ratio, mean change or mean end values per group, comparing the most exposed group of participants with the least exposed group, and the most adjusted regression outputs when LSSS intake was assessed at baseline and related to an outcome measure at a later time point.
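Combining subgroup statistics into an overall mean and SD, as described above, uses the standard two-group pooling formulae given in the Cochrane Handbook ( Higgins 2020a ). A sketch, with illustrative numbers (the subgroup values shown are hypothetical, not from any included study):

```python
from math import sqrt

def combine_groups(n1, m1, sd1, n2, m2, sd2):
    """Combine means and SDs of two subgroups into one overall mean and SD,
    using the Cochrane Handbook formulae for combining groups."""
    n = n1 + n2
    mean = (n1 * m1 + n2 * m2) / n
    # Pooled variance includes the within-group variances plus a term for the
    # spread between the two subgroup means.
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2
                  + (n1 * n2 / n) * (m1 - m2)**2) / (n - 1)
    return n, mean, sqrt(pooled_var)

# e.g. a hypertensive subgroup (n=10, mean SBP change -6 mmHg, SD 2) combined
# with a normotensive subgroup (n=10, mean change -2 mmHg, SD 2).
n, mean, sd = combine_groups(10, -6.0, 2.0, 10, -2.0, 2.0)
print(n, mean, round(sd, 2))  # 20 -4.0 2.83
```

Note that the combined SD exceeds either subgroup SD because the difference between the subgroup means contributes to the overall spread.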
Assessment of risk of bias in included studies
We assessed the risk of bias in RCTs and cluster‐RCTs using the Cochrane tool for assessment of risk of bias ( Higgins 2017 ). Two reviewers conducted these assessments independently for each included study. We resolved disagreements by discussion or through consultation with a third reviewer. We assessed the risk of bias for RCTs according to the following domains:
- Random sequence generation (selection bias)
- Allocation concealment (selection bias)
- Blinding of participants and personnel (performance bias)
- Blinding of outcome assessment (detection bias)
- Incomplete outcome data (attrition bias)
- Selective outcome reporting (reporting bias)
- Other bias
We also assessed the risk of bias for cluster‐RCTs according to the following domains ( Higgins 2017 ):
- Recruitment bias (selection bias)
- Comparability with RCTs
- Baseline imbalance (selection bias)
- Loss of clusters (attrition bias)
- Incorrect analysis
For cohort studies, we planned to use the following domains to assess risk of bias ( Naude 2018 ):
- Were adequate outcome data available?
- Was there matching of less‐exposed and more‐exposed participants for prognostic factors associated with outcome, or were relevant statistical adjustments done?
- Did the exposures between groups differ in components other than only LSSS exposure?
- Could we be confident in the assessment of outcomes?
- Could we be confident in the assessment of exposure?
- Could we be confident in the assessment of presence or absence of prognostic factors?
- Was selection of less‐exposed and more‐exposed groups from the same population?
Overall risk of bias assessment
As this review addressed mainly objective outcomes (e.g. blood pressure measurements, laboratory‐determined electrolyte values), we did not regard blinding to be of key importance for informing judgements on overall bias. Consequently, we judged overall risk of bias for each included study using two key domains for RCTs and four key domains for cluster‐RCTs, as follows:
- RCTs: allocation concealment (selection bias) and incomplete outcome data (attrition bias)
- Cluster‐RCTs: baseline imbalance (selection bias), recruitment bias (selection bias), incomplete outcome data (attrition bias) and loss of clusters (attrition bias)
We assessed the overall risk of bias of each included study as follows:
- low risk (low risk of bias for all key domains)
- high risk (high risk of bias for one or more key domains)
- unclear risk (unclear risk of bias for one or more key domains)
For cohort studies, we planned to consider domains relevant to confounding to inform judgements of the overall risk of bias.
Measures of treatment effect
For dichotomous outcomes, we presented proportions; for two‐group comparisons where numbers of events and participants were provided, we presented results as risk ratios (RRs) with 95% confidence intervals (CIs). Where event rates were reported per person‐years followed in separate groups, we calculated incidence rate ratios (IRRs) with 95% CIs to enable meta‐analysis of these studies with studies reporting rate ratios for the same outcomes. Rate ratios were calculated by dividing the rate in the intervention group by the rate in the control group. The 95% confidence interval (95% CI) of each rate ratio was calculated by taking the antilogarithm of the natural log of the rate ratio (log(IRR)) plus or minus 1.96 times the standard error of the log(IRR) ( Boston University School of Public Health 2018 ). The standard error was calculated as the square root of the sum of the inverses of the numbers of events in the intervention and control groups.
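The IRR calculation described above can be sketched as follows (illustrative only; the function name is ours):

```python
import math

def incidence_rate_ratio(events_int, pyears_int, events_ctrl, pyears_ctrl):
    """IRR with a 95% CI computed on the log scale."""
    irr = (events_int / pyears_int) / (events_ctrl / pyears_ctrl)
    # SE of log(IRR): square root of the summed inverse event counts
    se = math.sqrt(1 / events_int + 1 / events_ctrl)
    lower = math.exp(math.log(irr) - 1.96 * se)
    upper = math.exp(math.log(irr) + 1.96 * se)
    return irr, lower, upper

# e.g. 10 events over 1000 person-years in the intervention group versus
# 20 events over 1000 person-years in the control group
irr, lower, upper = incidence_rate_ratio(10, 1000, 20, 1000)
```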
Where hazard ratios (HRs) were reported for incident hypertension in the stepped‐wedge trial ( Bernabe‐Ortiz 2014 ), we presented these results with 95% CIs. Because these data were analysed differently and the design of this trial was unique, these measures were not combined with other data reporting on hypertension.
For continuous outcomes, we used the mean difference (MD) with 95% CIs if outcomes were measured in the same way between trials. Where continuous data were reported using different units across included studies, we planned to calculate and present the standardised mean difference (SMD).
Unit of analysis issues
Studies with more than two intervention groups
For the single study with more than two intervention groups ( Pan 2017 ), we combined event outcome data reported separately for both intervention groups (LSSS < 50% KCl and LSSS ≥ 50% KCl) in our meta‐analyses. These intervention groups were combined using the methods set out in the Cochrane Handbook ( Higgins 2020a ). Another study randomised participants to receive LSSS or continue with their usual practice, after which intervention participants were again randomised to receive LSSS with or without price subsidy ( Li 2016 ). As the LSSS intervention was the same in both these arms, we only extracted and used data for the overall LSSS group (both with and without subsidy) and the usual practice group.
Cluster‐RCTs
Four included cluster‐RCTs did not report sufficient information on adjustment for clustering in the statistical analysis or results section of either the full text ( Hu 2018 ; Li 2014 ; Zhou 2013 ), or conference abstract ( Zhang 2015 ). We calculated effective sample sizes for these trials using the design effect (DE), defined as 1 + (c ‐ 1) x ICC, where c is the average cluster size. Our calculations were based on an estimated intra‐cluster correlation coefficient (ICC) of 0.04, reported by a study conducted in similar trial settings in Northern China ( Neal 2017 ). For continuous data (e.g. DBP, SBP), we divided only the sample size by the design effect, while for dichotomous outcomes (e.g. cardiovascular events) we divided both the sample size and the number of people who experienced the event by the design effect. Where cluster‐RCTs reporting rates did not account for the effect of clustering in their analyses, we adjusted for clustering by inflating the standard errors, multiplying the standard error of the log(IRR) by the square root of the DE ( Higgins 2011 ). All estimates from cluster‐RCTs were combined in our meta‐analyses with those from RCTs that had individual group assignment ( Higgins 2020b ).
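The clustering adjustments above can be sketched under the stated ICC of 0.04 (function names are our own, for illustration):

```python
import math

ICC = 0.04  # intra-cluster correlation coefficient ( Neal 2017 )

def design_effect(avg_cluster_size, icc=ICC):
    """DE = 1 + (c - 1) x ICC, where c is the average cluster size."""
    return 1 + (avg_cluster_size - 1) * icc

def adjust_dichotomous(n, events, avg_cluster_size, icc=ICC):
    """Deflate both the sample size and the event count by the DE."""
    de = design_effect(avg_cluster_size, icc)
    return n / de, events / de

def inflate_se_log_irr(se_log_irr, avg_cluster_size, icc=ICC):
    """Inflate the SE of log(IRR) by the square root of the DE."""
    return se_log_irr * math.sqrt(design_effect(avg_cluster_size, icc))
```

For continuous outcomes only the sample size would be deflated; for rates the standard error is inflated instead.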
Dealing with missing data
We contacted study authors to request any missing or unreported data, such as group means, SDs, details of attrition, or details of the type of analysis conducted (e.g. intention‐to‐treat).
In cases where there were missing data due to attrition, we used the data available to conduct available case (modified intention‐to‐treat) meta‐analyses. We assessed the extent and impact of missing data and attrition for each included study during the Risk of bias assessment.
Assessment of heterogeneity
For each meta‐analysis, we examined the forest plots visually to determine whether heterogeneity of the size and direction of treatment effect was present between studies. We used the I² statistic, Tau², and the Chi² test to estimate the level of heterogeneity among the studies in each analysis. We defined substantial heterogeneity as Tau² > 0, and either I² > 50% or a low P value (< 0.10) in the Chi² test. Where substantial heterogeneity was found, we noted this in the text and explored it by conducting prespecified subgroup analyses to account for potential sources of clinical heterogeneity (see section: Subgroup analysis and investigation of heterogeneity ). We also considered other potential sources of heterogeneity, for example, differences in the nature of the interventions delivered. In addition, we explored methodological sources of heterogeneity by examining studies with different levels of risk of bias in a sensitivity analysis (see section: Sensitivity analysis ). We used caution in interpreting results with high levels of unexplained heterogeneity. We did not perform a meta‐analysis if the I² statistic was 90% or higher (considerable heterogeneity) ( Deeks 2020 ).
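For reference, I² is derived from the Chi² (Cochran's Q) statistic and its degrees of freedom as (Q − df)/Q, truncated at zero; this standard definition can be sketched as follows (our own illustration):

```python
def i_squared(q, df):
    """I²: percentage of total variability across studies attributable to
    heterogeneity rather than chance, from Cochran's Q and its df."""
    if q <= df:
        return 0.0  # no excess variability beyond chance
    return 100.0 * (q - df) / q
```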
Assessment of reporting biases
Where more than 10 included studies addressed a primary outcome, we used funnel plots to assess the possibility of small‐study effects and, in the case of asymmetry, intended to consider various explanations such as publication bias, poor study design and the effect of study size ( Sterne 2017 ).
Data synthesis
All syntheses were conducted using Review Manager Web 2021 ( RevMan Web 2021 ). We used a random‐effects meta‐analysis to combine data across more than one study, as we anticipated that there may be natural heterogeneity between studies, attributable to the different study settings, intervention strategies, or both. If a study only reported an MD and variance per group for an outcome, we first calculated MDs and 95% CI for the other studies reporting on that outcome, and then combined MDs and 95% CIs from all studies in a meta‐analysis using generic inverse variance (GIV). If a study reported rate ratios or events per person‐years, from which rate ratios could be calculated (see section: Measures of treatment effect ), we also combined these rate ratios and 95% CIs in a meta‐analysis using GIV. Where studies reported event outcomes as rate ratios and risk ratios, these were combined in a meta‐analysis using GIV by using rate ratios as approximations for risk ratios.
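A generic inverse-variance random-effects pool of this kind can be sketched with the DerSimonian–Laird estimator of between-study variance (a common choice for random-effects meta-analysis; this code is our own illustration, not the review's software):

```python
import math

def giv_random_effects(estimates, ses):
    """Pool study effects (e.g. MDs, or log rate ratios for ratio measures)
    by generic inverse variance with DerSimonian-Laird random effects.
    Returns (pooled estimate, 95% CI lower, 95% CI upper)."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0     # between-study variance
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
    se_p = math.sqrt(1 / sum(w_re))
    return pooled, pooled - 1.96 * se_p, pooled + 1.96 * se_p
```

For ratio measures pooled on the log scale, the returned values would be exponentiated before reporting.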
We sought to generate pooled estimates only where data from separate studies were similar enough to be combined (see section: Assessment of heterogeneity ). Data not suitable for pooling (defined as considerable heterogeneity, I² ≥ 90%) were presented in forest plots without the pooled estimate, or in tables, as appropriate. Data from peer‐reviewed publications and conference abstracts were eligible for inclusion in meta‐analysis; data from conference abstracts were identified in forest plots using footnotes. If needed, we also planned to conduct a narrative synthesis, adopting a systematic approach to presentation guided by the reporting guideline Synthesis Without Meta‐analysis (SWiM) in systematic reviews ( Campbell 2020 ).
Subgroup analysis and investigation of heterogeneity
We performed subgroup analyses where data allowed using a test for interaction (i.e. heterogeneity across subgroups rather than across studies), calculating summary effect sizes for each subgroup in a univariate analysis for prespecified subgroups provided by WHO NUGAG, as follows.
All comparisons:
- Study duration: short‐term (≤ 3 months) versus medium‐term (> 3 to 12 months) versus long‐term (> 12 months)
- Gender: male versus female versus mixed versus unknown
- Ethnicity: African versus Asian versus European versus mixed versus conducted in one setting (e.g. Europe) but ethnicity unspecified
- Blood pressure status (as defined by study authors): hypertensive versus normotensive versus hypotensive versus mixed versus unknown
- Baseline potassium intake: lower (urinary 24‐hour [24‐h] potassium excretion < 59 mmol [2.3 g] per day) versus higher (urinary 24‐h potassium excretion ≥ 59 mmol [2.3 g] per day) versus unknown or not reported as 24‐h excretion; based on global potassium intake estimates ( Global Dietary Database 2022 )
- Baseline sodium intake: lower (urinary 24‐h sodium excretion < 172 mmol [3.95 g sodium or 9.88 g sodium chloride] per day) versus higher (urinary 24‐h sodium excretion ≥ 172 mmol [3.95 g sodium or 9.88 g sodium chloride] per day) versus unknown or not reported as 24‐h excretion; based on global sodium intake estimates ( Powles 2013 )
- Iodine status (as defined by study authors): within normal ranges versus insufficient or deficient versus mixed versus unknown
- Type of LSSS, based on proportion of potassium chloride: ≥ 30% KCl versus < 30% KCl versus unknown versus non‐potassium‐containing LSSS (based on the description of a 'typical' formulation of potassium‐based salt substitutes with a 50% sodium chloride reduction in Cepanec 2017 )
- LSSS implementation: discretionary only (through added LSSS in cooking and at table) versus non‐discretionary only (through consumption of manufactured products) versus discretionary and non‐discretionary
- Salt as fortification vehicle: using salt as fortification vehicle versus not or unknown
All comparisons (only safety outcomes):
Possible risk of hyperkalaemia versus not at risk or unclear risk of hyperkalaemia (according to the criteria and assessment in Table 6 ), regardless of heterogeneity, only in the primary analyses of the following safety outcomes: change in blood potassium, hyperkalaemia, hypokalaemia and adverse events. The WHO NUGAG made the decision to limit this subgrouping to the safety outcomes since there are no clinical justifications to expect differences in the effects of LSSS on the effectiveness outcomes in populations possibly at risk of hyperkalaemia.
Additionally, for adults:
- Age: adults younger than 65 years versus 65 years and older versus mixed ages versus unknown ages
- Body mass index (BMI): underweight (< 18.5 kg/m²) versus normal weight (18.5 to 24.9 kg/m²) versus overweight (25 to 29.9 kg/m²) versus obese (≥ 30 kg/m²) for non‐Asian adults, or underweight (< 18.5 kg/m²) versus normal weight (18.5 to 22.9 kg/m²) versus overweight (23 to 24.9 kg/m²) versus obese (≥ 25 kg/m²) for Asian adults ( WHO 2000 )
Additionally, for children:
Age at start of study: 2 to 5 years versus 6 to 12 years versus 13 to 18 years versus mixed versus unknown
We also planned the following additional subgroup analyses, but available data did not allow these:
- Term of pregnancy at start of study: first trimester versus second trimester versus third trimester versus mixed versus unknown (in the comparison of pregnant women)
- Conditions and risk factors: renal impairment versus other NCDs versus use of medications that impair potassium excretion versus mixed versus unknown (in the comparison of pregnant women)
Sensitivity analysis
We conducted sensitivity analyses for primary outcomes if we had three or more studies per meta‐analysis, assessing the impact of:
- Risk of bias: removing studies with a high overall risk of bias (see section: Assessment of risk of bias in included studies )
- Study design: removing cluster‐RCTs
Summary of findings and assessment of the certainty of the evidence
We used the GRADE approach to judge the certainty of the evidence as it relates to the studies contributing data to the meta‐analyses for the main outcomes, using GRADEprofiler (GRADEpro) software ( GRADEpro GDT ). The GRADE approach assesses certainty as high, moderate, low, or very low according to five criteria, namely, risk of bias, inconsistency of results, indirectness, imprecision and publication bias ( Schünemann 2020 ).
For the following outcomes, we presented assessments in a GRADE Evidence Profile for Comparisons 1 and 3 (i.e. per type of population included in this review): DBP, SBP, hypertension, blood pressure control, cardiovascular events, cardiovascular mortality, blood potassium, hyperkalaemia, hypokalaemia and adverse events (other). We could not compile GRADE Evidence Profiles for the comparison in pregnant women as no eligible studies reported on outcomes in pregnant women. The effects of interventions on the outcomes included in the GRADE Evidence Profiles were interpreted according to magnitude of effect and certainty of the evidence, using GRADE guidance on informative statements to combine size and certainty of an effect ( Santesso 2020 ).
We used the approaches described below for each domain to guide our ratings and included explanations as footnotes in the GRADE Evidence Profiles.
Risk of bias
We considered downgrading if the majority (> 50%) of the weighted outcome data in a meta‐analysis were from studies at a high or unclear overall risk of bias.
Inconsistency
We considered downgrading due to either unexplained considerable heterogeneity (defined as I² ≥ 90%) or substantial heterogeneity (defined as I² from 50% to < 90%). We explored heterogeneity using prespecified subgroup analyses, and examined sensitivity analyses of study design and quality (overall risk of bias).
Indirectness
We considered downgrading based on population characteristics such as age, ethnicity, blood pressure status; or due to intervention characteristics (e.g. purpose of intervention, LSSS type), comparator, direct comparison and outcome.
Imprecision
Number of events or participants
We considered downgrading based on an insufficient number of events (i.e. fewer than 300 events) for dichotomous outcomes, or a sample size not meeting the optimal information size (OIS) (i.e. fewer than 400 people providing outcome measures) for continuous outcomes.
Minimally contextualised approach to GRADE ratings
In line with recent GRADE guidance ( Hultcrantz 2017 ; Zeng 2021 ), we selected a minimally contextualised approach that required us to specify thresholds for minimally important differences for key outcomes. The upper and lower limits of the 95% CIs were assessed in the same way to determine if they included the possibility of a small, trivial or no effect and an important benefit or harm ( Zeng 2021 ).
Applying this approach to rating the certainty of evidence using GRADE in relation to thresholds (other than no effect) ideally requires the use of absolute numbers ( Zeng 2021 ). To further support decision‐making by the WHO NUGAG Subgroup on Diet and Health about a population‐level intervention, we generated estimated population impacts for effect estimates and variation (95% CIs) for key clinical effectiveness outcomes when LSSS use was compared to regular salt use ( Verbeek 2021 ). This was applied to the following key clinical effectiveness outcomes in adults: change in DBP, change in SBP, cardiovascular events: non‐fatal stroke, cardiovascular events: non‐fatal acute coronary syndrome, cardiovascular mortality and stroke mortality. We used a simplified model to estimate absolute numbers from relative cardiovascular measures, as well as the absolute numbers of stroke deaths prevented or caused by changes in blood pressure, as a surrogate outcome ( Verbeek 2021 ). More detail on this simplified modelling approach can be found in Appendix 2 . | Results
Description of studies
For detailed information, see Characteristics of included studies ; Characteristics of excluded studies ; Characteristics of ongoing studies ; Characteristics of studies awaiting classification .
Results of the search
The study selection flowchart is available in Figure 1 . We screened the titles and abstracts of 6511 de‐duplicated records identified through searching electronic databases, as well as 14 records identified through handsearching of three relevant systematic reviews. We assessed the full texts of 161 records against our eligibility criteria, of which four were in Chinese and one in Portuguese; we obtained language translation assistance for assessment of these. We included 26 studies reported in 74 full‐text records ( Included studies ), of which one did not provide data that could be used in the quantitative syntheses (meta‐analyses) ( Arzilli 1986 ). Eight studies were identified as ongoing. We placed three studies under awaiting classification because we were unable to obtain further study details or data from the study authors in order to assess their eligibility for inclusion. We excluded a total of 75 full‐text records, of which 42 were duplicates ( Excluded studies ).
Included studies
Study designs
We included 26 eligible RCTs and did not identify any eligible prospective analytical cohort studies. The details of the included studies are summarised in Characteristics of included studies . Of the included trials, 16 were individually randomised trials ( Allaert 2013 ; Allaert 2017 ; Arzilli 1986 ; CSSS Collaborative Group 2007 ; Geleijnse 1994 ; Gilleran 1996 ; Kawasaki 1998 ; Mu 2003 ; Omvik 1995 ; Pan 2017 ; Pereira 2005 ; Sarkkinen 2011 ; Suppa 1988 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ) and 10 were cluster‐RCTs ( Bernabe‐Ortiz 2014 ; Chang 2006 ; Hu 2018 ; Li 2014 ; Li 2016 ; Mu 2009 ; Neal 2021 ; Toft 2020 ; Zhang 2015 ; Zhou 2013 ), including one stepped‐wedge cluster‐RCT ( Bernabe‐Ortiz 2014 ). One RCT reported a cross‐over design ( Allaert 2013 ) for which we only used first‐phase data. Twenty‐five of the eligible trials were published in peer‐reviewed journals; one cluster‐RCT was published in three separate conference abstracts and one trial as an abstract in a journal supplement. Full‐text publications of these could not be sourced after numerous attempts to contact authors.
Sample sizes and follow‐up
Nine of the 16 RCTs randomised ≤ 100 participants while seven were larger, randomising > 100 participants up to a maximum of 608 participants ( CSSS Collaborative Group 2007 ; Mu 2003 ; Pan 2017 ; Suppa 1988 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ). Fewer than half of these RCTs (n = 7) reported sample size calculations, based on expected changes of between 3 and 10 mmHg in SBP ( Allaert 2013 ; Allaert 2017 ; CSSS Collaborative Group 2007 ; Geleijnse 1994 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ); of these, two trials additionally based sample size calculations on expected changes of between 1.7 and 4 mmHg in DBP ( CSSS Collaborative Group 2007 ; Geleijnse 1994 ).
Of the cluster‐RCTs, five randomised families or households ( Hu 2018 ; Li 2014 ; Mu 2009 ; Toft 2020 ; Zhou 2013 ), ranging between 89 and 325 households and including 309 to 659 individual participants. Three cluster‐RCTs randomised villages: one conducted in Peru (N = 6 clusters; 2376 participants; Bernabe‐Ortiz 2014 ) and two conducted in China (N = 120 clusters; 2566 participants; Li 2016 ; and N = 600 clusters; 20,995 participants; Neal 2021 ). The cluster‐RCT by Zhang 2015 randomised nursing homes (N = 30); another randomised kitchens within a retirement home (N = 5; 2764 participants) ( Chang 2006 ). Of the ten cluster‐RCTs, only four reported appropriate sample size calculations (including an ICC) based on expected changes in blood pressure, 24‐h sodium excretion, relative reduction in stroke and sodium intake reduction ( Bernabe‐Ortiz 2014 ; Li 2016 ; Neal 2021 ; Toft 2020 , respectively); two cluster‐RCTs ( Chang 2006 ; Mu 2009 ) did not report a sample size calculation but did adjust for the effect of clustering in their analyses. One cluster‐RCT ( Hu 2018 ) reported a sample size calculation based on an expected change in SBP, but did not report incorporating an ICC in this calculation.
Eleven ( Allaert 2013 ; Allaert 2017 ; Arzilli 1986 ; CSSS Collaborative Group 2007 ; Gilleran 1996 ; Kawasaki 1998 ; Mu 2003 ; Omvik 1995 ; Sarkkinen 2011 ; Suppa 1988 ; Zhou 2009 ) of the 16 RCTs included a run‐in period, ranging between five days and six weeks. Ten RCTs tested an active LSSS intervention for a period of up to three months ( Allaert 2013 ; Allaert 2017 ; Arzilli 1986 ; Geleijnse 1994 ; Kawasaki 1998 ; Pereira 2005 ; Sarkkinen 2011 ; Suppa 1988 ; Yu 2021 ; Zhao 2014 ), and five for between three and 12 months ( CSSS Collaborative Group 2007 ; Gilleran 1996 ; Omvik 1995 ; Pan 2017 ; Zhou 2009 ). One RCT implemented a LSSS intervention for longer than 12 months ( Mu 2003 ). For the 10 cluster‐RCTs, the duration of the LSSS intervention was two months in one trial ( Li 2014 ), four months in another ( Toft 2020 ), and ranged between one and five years in the other eight cluster‐RCTs.
Settings
Four of the 16 RCTs were conducted in northern China or Tibet; most were done in rural or suburban households ( CSSS Collaborative Group 2007 ; Mu 2003 ; Zhao 2014 ; Zhou 2009 ). The remaining individually randomised trials from Brazil ( Pereira 2005 ), France ( Allaert 2013 ; Allaert 2017 ), Finland ( Sarkkinen 2011 ), India ( Yu 2021 ), Italy ( Suppa 1988 ), Japan ( Kawasaki 1998 ), the Netherlands ( Geleijnse 1994 ), Norway ( Omvik 1995 ), Taiwan ( Pan 2017 ) and the UK ( Gilleran 1996 ) were conducted at household level, except for one trial in a European hospital setting ( Arzilli 1986 ). Seven cluster‐RCTs were conducted in northern China or Tibet; most were done in rural or suburban households or communities ( Hu 2018 ; Li 2014 ; Li 2016 ; Mu 2009 ; Neal 2021 ; Zhou 2013 ), with one cluster‐RCT having been conducted in nursing homes ( Zhang 2015 ). One included cluster‐RCT from Taiwan ( Chang 2006 ) was also conducted in a nursing home setting. In Peru, one stepped‐wedge cluster‐RCT was conducted in rural villages and households ( Bernabe‐Ortiz 2014 ); a cluster‐RCT in Southwestern Denmark was conducted in families ( Toft 2020 ).
Participants
We did not find any eligible studies in pregnant women. Trial participants were adults with a mean age ranging from 20 to 75.21 years and children with a mean age ranging from 8.4 to 9.5 years. Most of the included trials (15/26) were conducted in populations living in Asian countries. Eleven studies specifically included only participants with hypertension ( Allaert 2013 ; Arzilli 1986 ; Geleijnse 1994 ; Gilleran 1996 ; Mu 2003 ; Omvik 1995 ; Pereira 2005 ; Sarkkinen 2011 ; Suppa 1988 ; Yu 2021 ; Zhao 2014 ), and 11 included participants with and without hypertension ( Bernabe‐Ortiz 2014 ; Chang 2006 ; CSSS Collaborative Group 2007 ; Hu 2018 ; Kawasaki 1998 ; Li 2014 ; Mu 2009 ; Neal 2021 ; Pan 2017 ; Zhou 2009 ; Zhou 2013 ). One study each included only participants with normal blood pressure ( Toft 2020 ) or participants who were pre‐hypertensive ( Allaert 2017 ); blood pressure status at baseline was unknown in the remaining studies ( Li 2016 ; Zhang 2015 ). The largest trial ( Neal 2021 ) included participants with an elevated risk of stroke, and approximately 70% of participants in the intervention and control groups had a history of stroke at baseline. Fifteen studies reported outcome data separately in participants with hypertension ( Allaert 2013 ; Arzilli 1986 ; Geleijnse 1994 ; Gilleran 1996 ; Hu 2018 ; Kawasaki 1998 ; Mu 2003 ; Mu 2009 ; Omvik 1995 ; Pereira 2005 ; Sarkkinen 2011 ; Suppa 1988 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ). Eleven of these studies specifically included only participants with hypertension. Three of these studies included participants with hypertension and their family members ( Hu 2018 ; Mu 2009 ; Zhou 2009 ) and one study included clinically healthy middle‐aged and elderly volunteers ( Kawasaki 1998 ); these studies additionally reported outcome data in participants with normal blood pressure separately.
One cluster‐RCT compared LSSS to regular salt in 92 children (numbers analysed) ( Toft 2020 ) by randomising families to LSSS or regular salt in bread. Seven studies included participants possibly at risk of hyperkalaemia ( Chang 2006 ; Geleijnse 1994 ; Hu 2018 ; Neal 2021 ; Yu 2021 ; Zhao 2014 ; Zhou 2013 ), four studies included participants at unclear risk of hyperkalaemia ( Arzilli 1986 ; Li 2016 ; Mu 2003 ; Zhang 2015 ) and the remaining trials included participants considered not to be at risk of hyperkalaemia. The criteria and assessments applied to classify the hyperkalaemia risk of participants per included study are summarised in Table 6 . All 26 included trials excluded participants in whom an increased intake of potassium is known to be potentially harmful, for example, people with chronic kidney disease, type 1 or 2 diabetes mellitus, impaired renal function, or those using potassium‐sparing medications.
Interventions
In 23 of the 26 studies, combinations of potassium and/or magnesium and/or calcium salts were used as sodium substitutes in the LSSS interventions, with two studies assessing a LSSS intervention consisting of NaCl combined with 3% chitosan ( Allaert 2013 ; Allaert 2017 ) and the remaining study assessing a LSSS intervention ‘naturally low in sodium’ in bread ( Toft 2020 ). Product characteristics of the latter, obtained through author correspondence ( Toft 2020 ), showed that the compound contained trace amounts of potassium (approximately 0.1 to 0.2%). Two RCTs ( Arzilli 1986 ; Mu 2003 ) and one cluster‐RCT ( Mu 2009 ) assessed LSSS with an unknown KCl content. Four cluster‐RCTs ( Chang 2006 ; Li 2014 ; Neal 2021 ; Zhang 2015 ) and six RCTs ( Geleijnse 1994 ; Gilleran 1996 ; Pan 2017 ; Pereira 2005 ; Yu 2021 ; Zhou 2009 ) assessed the effects of a LSSS intervention containing ≥ 30% KCl, while the remainder of the trials used LSSS interventions containing < 30% KCl. One RCT included two LSSS intervention arms, both including ≥ 30% KCl ( Pan 2017 ).
Most trials (22/26) administered the LSSS intervention as a discretionary intervention (at the individual, household, institution or salt supply chain level). Of these, most trials replaced the supply of regular salt with LSSS within each household, to be used at the table and during food preparation ( Bernabe‐Ortiz 2014 ; CSSS Collaborative Group 2007 ; Gilleran 1996 ; Hu 2018 ; Li 2014 ; Li 2016 ; Mu 2003 ; Mu 2009 ; Neal 2021 ; Omvik 1995 ; Pan 2017 ; Pereira 2005 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ; Zhou 2013 ). Two trials were conducted in nursing homes where LSSS was used during food preparation in the intervention kitchens of one trial ( Chang 2006 ), while the specific implementation of the LSSS was unclear in the other trial ( Zhang 2015 ). LSSS was administered as ‘added salt’ in four trials ( Allaert 2013 ; Allaert 2017 ; Arzilli 1986 ; Suppa 1988 ).
A cluster‐RCT from Peru replaced the supply of regular salt in each village with LSSS in the salt supply chain, including households, food vendors, bakeries, community kitchens and restaurants. A social marketing/education strategy promoting LSSS in each village was aimed at women who were responsible for household food preparation ( Bernabe‐Ortiz 2014 ). Another cluster‐RCT from northern China provided LSSS via the local food supply chain. LSSS was available for purchase at local village shops at either a subsidised price (same as regular salt) in half of the intervention villages, or at a regular price (approximately double that of regular salt). A community‐based health education programme to promote the use of LSSS was implemented via public announcement systems, bulletin boards, and specially developed promotional materials ( Li 2016 ).
Three RCTs incorporated LSSS into prepared test foods, such as processed main dishes, bread, cheese, luncheon meats, soups or smoked sausage ( Geleijnse 1994 ; Sarkkinen 2011 ), or seasonings containing LSSS, such as miso and soy sauce ( Kawasaki 1998 ). These trials also provided trial participants with LSSS as salt for household food preparation and for use as table salt. A fourth cluster‐RCT used bread as the exclusive method of LSSS implementation by incrementally replacing normal salt with LSSS in bread over a period of five to six weeks, with participants followed up for four months in total ( Toft 2020 ).
Trial participants in four studies were instructed not to change their dietary habits during the study period ( Allaert 2017 ; Geleijnse 1994 ; Hu 2018 ; Kawasaki 1998 ), whereas participants from two trials were advised to either reduce their salt intake ( Omvik 1995 ), or avoid salt‐rich foods ( Sarkkinen 2011 ). Two trials reported co‐interventions such as lifestyle advice about eating less fat and sugar and doing more physical exercise ( Allaert 2013 ), or a hypocaloric diet with increased physical exercise ( Pereira 2005 ).
Outcome measures
Outcomes were regarded as clinical effectiveness outcomes, or safety outcomes related to the intake of LSSS with potassium (as guided by the WHO NUGAG Subgroup on Diet and Health).
Clinical effectiveness outcomes for comparisons in both adults and children were change in DBP, change in SBP, hypertension, blood pressure control, cardiovascular events, cardiovascular mortality, all‐cause mortality, antihypertensive medication use, change in fasting blood glucose, change in blood triglycerides, change in total blood cholesterol, and change in 24‐h urinary sodium and potassium excretion. In addition, clinical effectiveness outcomes in adults only were diabetes mellitus diagnosis and change in BMI; and in children only were growth changes, bone densitometry and bone health.
Safety outcomes for comparisons in both adults and children were change in blood potassium, hyperkalaemia, hypokalaemia, adverse events, renal function and hyponatraemia.
Primary outcomes
Only one RCT ( Pan 2017 ) and one cluster‐RCT ( Chang 2006 ) did not report on changes in DBP and SBP. However, we were also unable to use relevant outcome data from four trials, due to blood pressure data being reported in a figure ( CSSS Collaborative Group 2007 (DBP only); Kawasaki 1998 (DBP and SBP)) or study authors providing insufficient information on the number of participants per treatment group ( Arzilli 1986 ; Mu 2009 ). Three RCTs reported two types of DBP and SBP outcome data, i.e. ambulatory and clinic BP measurements ( Allaert 2017 ; Omvik 1995 ; Pereira 2005 ). In order to minimise potential heterogeneity between studies, we included only the clinic BP measurements from these trials in our meta‐analyses.
Two cluster‐RCTs reported on the outcome hypertension; defined as SBP ≥ 140 mmHg, DBP ≥ 90 mmHg, or the use of blood‐pressure lowering therapy in the last two weeks ( Li 2016 ) or SBP ≥ 140 mmHg, DBP ≥ 90 mmHg, a self‐reported physician diagnosis or current treatment for hypertension ( Bernabe‐Ortiz 2014 ). Two RCTs reported on blood pressure control, defined as achieving SBP ≤ 140 mmHg and DBP ≤ 90 mmHg in both trials ( Allaert 2013 ; Zhao 2014 ).
For the outcome cardiovascular events, we extracted data related to events such as stroke ( Gilleran 1996 ; Hu 2018 ; Neal 2021 ; Pan 2017 ; Zhao 2014 ), myocardial infarction ( Hu 2018 ) or acute coronary syndrome ( Neal 2021 ), coronary heart disease or heart failure ( Li 2016 ), hypotension ( Li 2016 ), angina ( Allaert 2017 ; Omvik 1995 ), bradycardia ( Suppa 1988 ), as well as composite outcomes such as cardiovascular events ( CSSS Collaborative Group 2007 ; Zhou 2009 ) or cardiovascular symptoms ( Sarkkinen 2011 ). Three trials reported insufficient data on the number of events per group ( Li 2016 ; Suppa 1988 ) as described in Table 7 . Cardiovascular mortality was reported by one RCT ( Zhao 2014 ) and three cluster‐RCTs ( Chang 2006 ; Neal 2021 ; Zhou 2013 ); stroke mortality was reported by two cluster‐RCTs ( Neal 2021 ; Zhou 2013 ).
Six trials reported on hyperkalaemia events; two trials ( CSSS Collaborative Group 2007 ; Mu 2009 ) did not explicitly define criteria for hyperkalaemia, and one assessed self‐reported hyperkalaemia without defined criteria ( Li 2016 ). Yu 2021 and Zhang 2015 defined the outcome as a serum potassium level of more than 6.5 mmol/L and 5.5 mmol/L, respectively. Neal 2021 reported on definite, probable, possible and unlikely hyperkalaemia. We extracted only data on definite and probable events, defined as elevated serum potassium > 5.5 mmol/L with typical electrocardiogram (ECG) changes documented in medical notes; events that were possible (self‐reported serum potassium > 5.5 mmol/L or ECG changes, but no supporting documentation to verify) or unlikely (clinical history and documentation suggesting minimal indication for the diagnosis) were excluded from our review. Insufficient information was available from one trial ( Li 2016 ), as described in Table 7 . Only one RCT reported on hypokalaemia events ( Pereira 2005 ), though criteria for the condition were not explicitly defined.
Seven trials reported changes in serum ( Allaert 2017 ; Geleijnse 1994 ; Kawasaki 1998 ; Pereira 2005 ; Zhang 2015 ) or plasma concentrations of potassium ( Omvik 1995 ; Sarkkinen 2011 ) between intervention and control groups. Data could not be extracted from one study ( Sarkkinen 2011 ), as described in Table 7 .
Secondary outcomes
Four RCTs ( CSSS Collaborative Group 2007 ; Pan 2017 ; Zhao 2014 ; Zhou 2013 ), and three cluster‐RCTs ( Chang 2006 ; Neal 2021 ; Zhang 2015 ) reported on all‐cause mortality.
Three trials reported on the occurrence of serious adverse events (not defined) during the study period ( Bernabe‐Ortiz 2014 ; CSSS Collaborative Group 2007 ; Pan 2017 ). Other adverse event outcomes extracted included gastrointestinal symptoms (stomach ache or abdominal distension) ( Sarkkinen 2011 ; Zhao 2014 ), hypercalcaemia and renal calculi ( Mu 2009 ), appendicitis, nephritis, nephrosis ( Hu 2018 ), influenza ( Allaert 2017 ), respiratory symptoms ( Sarkkinen 2011 ) and dorsalgia ( Allaert 2017 ). Suppa 1988 included self‐reported adverse events including asthenia, bradycardia, drowsiness, insomnia, decreased libido and depression, but did not report the number of events per group. None of the included trials reported adverse events such as nausea or vomiting.
Five trials reported on changes in antihypertensive medication use ( Bernabe‐Ortiz 2014 ; Hu 2018 ; Li 2016 ; Zhao 2014 ; Zhou 2013 ). Four trials reported the number of participants using antihypertensive medications: two trials reported on this outcome in participants with hypertension ( Hu 2018 included participants with hypertension and their family members, but only reported on the former; Zhao 2014 included only participants with hypertension) and one trial each in participants with unknown hypertensive status ( Li 2016 ) and participants using hypotensive medication at baseline ( Zhou 2013 ). Bernabe‐Ortiz 2014 assessed participant‐reported changes in medication use for hypertension and type 2 diabetes mellitus combined.
Three trials reported changes in serum creatinine ( Omvik 1995 ; Pereira 2005 ; Zhang 2015 ). One cluster‐RCT ( Li 2016 ) reported the mean urinary albumin‐to‐creatinine ratios of participants in the intervention and control groups, as well as the proportion of participants with albuminuria (including micro‐ and macro‐albuminuria) in both groups.
Four studies reported changes in BMI ( Li 2016 ; Pereira 2005 ; Sarkkinen 2011 ; Toft 2020 ), while two reported changes in fasting blood glucose concentrations ( Toft 2020 ; Zhou 2013 ). Five studies reported changes in blood triglycerides ( Gilleran 1996 ; Kawasaki 1998 ; Pereira 2005 ; Toft 2020 ; Zhou 2009 ) and total blood cholesterol ( Geleijnse 1994 ; Gilleran 1996 ; Kawasaki 1998 ; Toft 2020 ; Zhou 2009 ), respectively.
A total of 12 studies collected 24‐h urine samples and reported on 24‐h urinary sodium excretion ( Bernabe‐Ortiz 2014 ; Geleijnse 1994 ; Gilleran 1996 ; Kawasaki 1998 ; Li 2016 ; Neal 2021 ; Omvik 1995 ; Sarkkinen 2011 ; Suppa 1988 ; Toft 2020 ; Yu 2021 ; Zhou 2009 ); the same studies reported on 24‐h urinary potassium excretion from 24‐h urine samples.
None of the included studies reported on the outcomes of diabetes mellitus diagnosis or hyponatraemia.
Funding sources and conflicts of interest
Nine of the 26 studies did not disclose their funding source(s). The remaining studies were funded as follows:
nine public/non‐commercial funding only, including government bodies and research institutions ( Bernabe‐Ortiz 2014 ; CSSS Collaborative Group 2007 ; Li 2014 ; Li 2016 ; Mu 2009 ; Pan 2017 ; Toft 2020 ; Zhao 2014 ; Zhou 2013 );
four public/non‐commercial funding plus LSSS provided for the trial by the LSSS manufacturer ( Geleijnse 1994 ; Neal 2021 ; Omvik 1995 ; Yu 2021 );
two commercial funding by LSSS manufacturers plus LSSS provided for the trial ( Chang 2006 ; Sarkkinen 2011 );
one commercial funding from a food industry research fund plus LSSS provided for the trial by the LSSS manufacturer ( Hu 2018 );
one LSSS provided for the trial by the LSSS manufacturer ( Kawasaki 1998 ).
The authors of 13 included studies did not report on potential conflicts of interest (COI), whereas those of the other 13 studies did. Of these 13 studies, the authors of 11 declared that they had no potential COI ( Bernabe‐Ortiz 2014 ; Chang 2006 ; CSSS Collaborative Group 2007 ; Mu 2009 ; Pan 2017 ; Pereira 2005 ; Toft 2020 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ; Zhou 2013 ); the author of one study declared a potential COI as chair of the Australian Division of World Action on Salt and Health ( Li 2016 ); and some members of the author team of a large cluster‐RCT declared potential COI, while the remaining members declared none ( Neal 2021 ).
Excluded studies
We contacted nine corresponding authors for further information to assist with study inclusion. We excluded 32 studies (33 full‐text records) due to the following reasons:
Wrong study design (single‐arm trial): 5
Wrong study design (commentary/letter): 3
Wrong study design (case report/study): 2
Wrong study design (case series): 2
Wrong study design (non‐randomised trial): 2
Wrong study design (quasi‐randomised trial): 2
Wrong study design (cross‐over with first phase data not available): 1
Wrong type of intervention (multifactorial): 4
Wrong type of intervention (dietary): 2
Wrong type of intervention (LSSS administered as supplement): 1
Wrong type of intervention (salt restriction education): 1
Wrong comparator: 4
Wrong outcome (sensory/organoleptic): 2
Wrong outcome (sodium concentration of homemade food): 1
The Characteristics of excluded studies section lists these 32 studies with the reasons for exclusion. We also excluded 42 duplicate records at the full‐text screening stage. Three remaining references, for which we could not reach the authors or for which the information provided was insufficient to make a clear judgement, were included as Studies awaiting classification . The eight ongoing studies are detailed in Characteristics of ongoing studies .
Risk of bias in included studies
The Characteristics of included studies section provides details of the judgements for each risk of bias domain per study. Figure 2 presents a summary of the risk of bias judgements for each included study, and Figure 3 summarises the judgements per risk of bias domain.
Allocation
Random sequence generation
Fifteen trials described adequate methods of random sequence generation and were at low risk of selection bias ( Bernabe‐Ortiz 2014 ; Chang 2006 ; CSSS Collaborative Group 2007 ; Geleijnse 1994 ; Hu 2018 ; Li 2014 ; Li 2016 ; Mu 2009 ; Neal 2021 ; Pan 2017 ; Toft 2020 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ; Zhou 2013 ). Eleven trials did not report how the random sequence had been generated and were at unclear risk of selection bias.
Allocation concealment
Eleven trials described methods of allocation concealment judged to be at low risk of selection bias ( Allaert 2017 ; Bernabe‐Ortiz 2014 ; CSSS Collaborative Group 2007 ; Hu 2018 ; Li 2016 ; Neal 2021 ; Pan 2017 ; Toft 2020 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ). The remaining fifteen did not report sufficient information on allocation concealment, and were at unclear risk of selection bias.
Blinding
Blinding of participants and personnel (performance bias)
Performance bias was assessed as unlikely in fifteen trials, while eleven trials had an unclear risk of bias, mainly due to insufficient information on blinding of either participants or personnel ( Allaert 2013 ; Arzilli 1986 ; Bernabe‐Ortiz 2014 ; Chang 2006 ; Kawasaki 1998 ; Li 2014 ; Li 2016 ; Mu 2003 ; Mu 2009 ; Suppa 1988 ; Zhang 2015 ).
Blinding of outcome assessment (detection bias)
Nineteen trials were assessed to have a low risk of detection bias, while seven had an unclear risk mainly due to insufficient information on blinding of outcome assessors ( Li 2016 ; Mu 2003 ; Mu 2009 ; Pereira 2005 ; Suppa 1988 ; Zhang 2015 ; Zhou 2013 ). Some of these trials reported the measurement of blood pressure with non‐automatic devices, increasing the likelihood of detection bias ( Mu 2003 ; Pereira 2005 ; Suppa 1988 ).
Incomplete outcome data
Four trials were at a high risk of bias due to high overall or differential attrition (≥ 10%) ( Geleijnse 1994 ; Gilleran 1996 ; Hu 2018 ; Mu 2003 ). Fourteen studies were at low risk because they reported low overall or differential attrition ( Allaert 2013 ; Allaert 2017 ; Chang 2006 ; CSSS Collaborative Group 2007 ; Kawasaki 1998 ; Li 2014 ; Neal 2021 ; Omvik 1995 ; Sarkkinen 2011 ; Suppa 1988 ; Toft 2020 ; Yu 2021 ; Zhou 2009 ; Zhou 2013 ). Two studies reported high attrition; however intention‐to‐treat analyses (ITT) were conducted using multiple imputation in one study ( Zhao 2014 ) and the last‐observation‐carried‐forward method in the other study ( Pan 2017 ); therefore, these were judged as being at low risk. High attrition was reported at some time points in the stepped‐wedge cluster‐RCT; however, it was unclear whether any data were imputed and therefore this study had an unclear risk ( Bernabe‐Ortiz 2014 ). Five additional studies were at unclear risk of bias since insufficient information on attrition was provided.
Selective reporting
Two trials were assessed as being at high risk of bias. Substudies of both trials reported outcomes not prespecified in the study protocol ( CSSS Collaborative Group 2007 ; Li 2016 ). Two trials at low risk of bias reported outcomes prespecified in the study protocol; the remaining studies had an unclear risk due to inadequate reporting of all prespecified outcomes, or the unavailability of the study protocol.
Other potential sources of bias
One trial was assessed as being at high risk of misclassification bias due to limited information for the adjudication of clinical outcome events, as well as self‐reported potential hyperkalaemia events ( Neal 2021 ). Two trials were assessed as being at unclear risk. In the stepped‐wedge trial, a reduction in blood pressure was observed in some clusters (villages) before the intervention ( Bernabe‐Ortiz 2014 ), while in another cluster‐RCT, there was a considerable risk of contamination in the intervention group due to the unlimited availability of condiments and spices such as soy sauce and monosodium glutamate ( Chang 2006 ). No potential sources of bias were identified in the remaining studies.
Additional domains assessed for cluster‐RCTs
Recruitment bias
One trial had a high risk for recruitment bias since a number of participants were recruited after the clusters (kitchens) were randomised ( Chang 2006 ). Six trials reported recruitment of participants before randomisation of clusters and were therefore considered at low risk ( Bernabe‐Ortiz 2014 ; Hu 2018 ; Li 2014 ; Li 2016 ; Neal 2021 ; Toft 2020 ). Three trials did not provide sufficient information regarding the timing of recruitment.
Comparability with individually randomised trials (RCTs)
Six trials at low risk of bias reported effect estimates comparable to those reported by similar RCTs ( Bernabe‐Ortiz 2014 ; Chang 2006 ; Hu 2018 ; Li 2014 ; Zhang 2015 ; Zhou 2013 ), whereas it was not possible to compare the effect estimates in four trials.
Baseline imbalance
Three trials reported no differences in baseline characteristics of the participants in the intervention and control groups ( Hu 2018 ; Li 2014 ; Neal 2021 ), while two adjusted for baseline differences and were therefore also considered to be at low risk ( Bernabe‐Ortiz 2014 ; Zhou 2013 ). Five studies that reported insufficient information regarding baseline characteristics were at unclear risk of bias.
Loss of clusters
One trial reported an analysis without outcome data from almost one‐third of their included clusters, thus was considered at high risk of attrition bias ( Zhang 2015 ). Seven trials had a low risk ( Bernabe‐Ortiz 2014 ; Chang 2006 ; Hu 2018 ; Li 2014 ; Li 2016 ; Neal 2021 ; Toft 2020 ) while two did not provide sufficient information on the number of clusters (families) lost to follow‐up ( Mu 2009 ), or the reasons for loss to follow‐up ( Zhou 2013 ).
Incorrect analysis
Three trials did not report adjustment for clustering in their analysis and were thus considered at high risk of bias ( Hu 2018 ; Li 2014 ; Zhou 2013 ). Six trials were at low risk: five reported statistical adjustment for clustering ( Chang 2006 ; Li 2016 ; Mu 2009 ; Neal 2021 ; Toft 2020 ) and one also accounted for time trends in its analysis ( Bernabe‐Ortiz 2014 ). One trial that reported insufficient information regarding statistical adjustment for clusters was considered at unclear risk.
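Where a trial's analysis ignores clustering, the standard remedy (per the Cochrane Handbook) is to deflate the sample size by the design effect, 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intracluster correlation coefficient. A minimal sketch with illustrative values (the ICC of 0.05 and cluster size of 20 below are assumptions, not values from any included trial):

```python
# Standard design-effect correction for cluster-RCTs analysed without
# adjustment for clustering. The ICC and average cluster size used here
# are illustrative assumptions only.

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Design effect: 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n: int, avg_cluster_size: float, icc: float) -> float:
    """Deflate a naive sample size by the design effect."""
    return n / design_effect(avg_cluster_size, icc)

deff = design_effect(20, 0.05)                  # 1.95
n_eff = effective_sample_size(1000, 20, 0.05)   # ~512.8
print(round(deff, 2), round(n_eff, 1))
```

The effective sample size, rather than the raw number of participants, would then be entered into the meta‐analysis for such a trial.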
Overall risk of bias
Three RCTs and three cluster‐RCTs had a high overall risk of bias due to attrition ( Geleijnse 1994 ; Gilleran 1996 ; Hu 2018 ; Mu 2003 ; Zhang 2015 ) or recruitment bias ( Chang 2006 ). Six RCTs had a low overall risk ( Allaert 2017 ; CSSS Collaborative Group 2007 ; Pan 2017 ; Yu 2021 ; Zhao 2014 ; Zhou 2009 ) and two cluster‐RCTs ( Li 2014 ; Neal 2021 ) had a low overall risk, while the remaining seven RCTs and five cluster‐RCTs had an unclear risk mainly due to uncertainty about selection, attrition and recruitment bias ( Allaert 2013 ; Arzilli 1986 ; Bernabe‐Ortiz 2014 ; Kawasaki 1998 ; Li 2016 ; Mu 2009 ; Omvik 1995 ; Pereira 2005 ; Sarkkinen 2011 ; Suppa 1988 ; Toft 2020 ; Zhou 2013 ).
Effects of interventions
See: Table 1 ; Table 2
Comparison 1. Low‐sodium salt substitutes versus regular salt or no active intervention in adults
Table 1 presents the effects of LSSS compared to regular salt or no active intervention in adult participants on changes in DBP and SBP; as well as the number of participants per group with hypertension, blood pressure control, cardiovascular events (various events and non‐fatal stroke); the rate ratio of participants in the intervention group with non‐fatal acute coronary syndrome (ACS), cardiovascular mortality and stroke mortality; changes in blood potassium; and the number of participants per group with hyperkalaemia, hypokalaemia and other adverse events.
A total of 26 RCTs, 16 randomising individual participants and 10 randomising clusters, reporting on 34,961 adult participants, were included in this comparison. Key details about studies in this comparison, including study design, setting and overall risk of bias; characteristics of the intervention, comparator, population and outcomes; method of synthesis, and time points of measurement are included in the Overview of Synthesis and Included Studies (OSIS) table ( Table 8 ).
Primary outcomes
Change in diastolic blood pressure (DBP, mmHg)
GRADE assessment suggests that LSSS probably reduce DBP slightly, on average, compared to regular salt in adults (moderate‐certainty evidence, downgraded once for inconsistency).
Average reductions in DBP ranged from 0.6 mmHg to 11.33 mmHg with LSSS, and changes with regular salt ranged from a reduction of 7 mmHg to an increase of 2.6 mmHg, in the 19 trials that reported this outcome. Two trials did not have average changes per group available: Li 2016 reported end values and no baseline measures; Neal 2021 reported only mean differences between groups. The meta‐analysis showed a small but important average difference in DBP between the LSSS and regular salt groups (MD ‐2.43 mmHg, 95% confidence interval (CI) ‐3.50 to ‐1.36, I 2 = 88%, 20,830 participants, 19 RCTs, moderate‐certainty evidence, Analysis 1.1 ). Follow‐up ranged from four weeks to 60 months.
This small yet important effect was confirmed by sensitivity analyses restricted to trials with 'low' or 'unclear' overall risk of bias ( Analysis 1.12 ) and to trials randomising participants at the individual level, i.e. excluding cluster‐RCTs ( Analysis 1.13 ). The same direction of effect was also seen in a large stepped‐wedge cluster trial following 2376 participants over 30 months (unclear overall risk of bias), though the magnitude of the effect was considerably diminished ( Analysis 1.14 ).
The estimated population impact, as described in Appendix 2 , indicated that the effect of the primary meta‐analysis ( Analysis 1.1 ) corresponded to an estimated 60 (ranging from 35 to 83) stroke deaths prevented per 100,000 persons, aged 50 years and older, per year.
Subgroup analyses were undertaken for this outcome due to the presence of substantial heterogeneity. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping by study duration ( Analysis 1.2 ) suggests there may be no important differences in average effects between subgroups. Subgrouping participants by age ( Analysis 1.3 ), gender ( Analysis 1.4 ), ethnicity ( Analysis 1.5 ), BMI ( Analysis 1.6 ), blood pressure status ( Analysis 1.7 ) or baseline 24‐h urinary sodium ( Analysis 1.10 ) or potassium ( Analysis 1.11 ) excretion further suggested there may be no important clinical differences in average effects between subgroups. Subgrouping by intervention characteristics also suggests there may be no important differences in average effects between the manner of LSSS implementation ( Analysis 1.8 ) or the type of LSSS ( Analysis 1.9 ).
Five trials reported this outcome in an unusable format (e.g. reported only in a figure from which we could not extract exact values, or did not report standard deviations and participant numbers along with mean change), and usable data were not provided when requested from authors ( Table 7 ). The funnel plot ( Figure 4 ) shows that most trials had similar effect sizes despite varying inter‐trial standard errors, thereby limiting the ability to assess asymmetry. The mean difference in the fixed‐effect model, an analytical approach that gives less weight to small studies, for the primary analysis (‐2.27 mmHg, 95% CI ‐2.56 to ‐1.98) was similar to the mean difference when using the random‐effects model, which gives more weight to smaller studies, for the primary analysis (‐2.43 mmHg, 95% CI ‐3.50 to ‐1.36, Analysis 1.1 ). This suggests that any small‐study effects have little impact on the intervention effect estimate.
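The fixed‐effect versus random‐effects comparison reported above rests on standard inverse‐variance pooling. The sketch below uses made‐up mean differences and standard errors (not the review's trial‐level data) to show how a DerSimonian‐Laird random‐effects estimate, by adding the between‐study variance tau² to each study's variance, gives relatively more weight to smaller studies than the fixed‐effect estimate does:

```python
# Inverse-variance fixed-effect and DerSimonian-Laird random-effects pooling
# for mean differences. The inputs are illustrative, not the review's data.

def pool(mds, ses):
    """Return (fixed-effect MD, random-effects MD, tau^2)."""
    w = [1 / se**2 for se in ses]                       # fixed-effect weights
    fixed = sum(wi * md for wi, md in zip(w, mds)) / sum(w)
    # DerSimonian-Laird between-study variance (tau^2)
    q = sum(wi * (md - fixed) ** 2 for wi, md in zip(w, mds))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(mds) - 1)) / c)
    w_re = [1 / (se**2 + tau2) for se in ses]           # random-effects weights
    random_ = sum(wi * md for wi, md in zip(w_re, mds)) / sum(w_re)
    return fixed, random_, tau2

# A precise small-effect trial alongside two imprecise larger-effect trials:
fixed, random_, tau2 = pool([-1.0, -3.0, -5.0], [0.3, 0.8, 0.9])
print(round(fixed, 2), round(random_, 2))
# The random-effects estimate sits further from the most precise trial.
```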
Change in systolic blood pressure (SBP, mmHg)
GRADE assessment suggests that LSSS probably reduce SBP slightly, on average, compared to regular salt in adults (moderate‐certainty evidence, downgraded once for inconsistency).
Average reductions in SBP ranged from 1.5 mmHg to 15.25 mmHg with LSSS, and changes with regular salt ranged from a reduction of 6.8 mmHg to an increase of 4 mmHg, in the 20 trials that reported this outcome. Three trials did not have average changes per group available: Li 2016 reported end values and no baseline measures; CSSS Collaborative Group 2007 and Neal 2021 reported only mean differences between groups. The meta‐analysis showed a small but important average difference in SBP between the LSSS and regular salt groups (MD ‐4.76 mmHg, 95% CI ‐6.01 to ‐3.50, I 2 = 78%, 21,414 participants, 20 RCTs, moderate‐certainty evidence, Analysis 1.15 ). Follow‐up ranged from four weeks to 60 months.
This small yet important effect was confirmed by sensitivity analyses restricted to trials with 'low' or 'unclear' overall risk of bias ( Analysis 1.26 ) and to trials randomising participants at the individual level, i.e. excluding cluster‐RCTs ( Analysis 1.27 ). The same direction of effect was also seen in the stepped‐wedge cluster trial that followed 2376 participants over 30 months (unclear overall risk of bias), though the magnitude of the effect was considerably diminished ( Analysis 1.28 ).
The estimated population impact, as described in Appendix 2 , indicated that the effect of the primary meta‐analysis ( Analysis 1.15 ) corresponded to an estimated 53 (ranging from 40 to 65) stroke deaths prevented per 100,000 persons, aged 50 years and older, per year.
Subgroup analyses were undertaken for this outcome due to the presence of substantial heterogeneity. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping by study duration ( Analysis 1.16 ), and subgrouping participants by age ( Analysis 1.17 ), gender ( Analysis 1.18 ), ethnicity ( Analysis 1.19 ), BMI ( Analysis 1.20 ), blood pressure status ( Analysis 1.21 ) or baseline 24‐h urinary sodium ( Analysis 1.24 ) or potassium ( Analysis 1.25 ) excretion suggests there may be no important clinical differences in average effects between subgroups. Subgrouping by intervention characteristics also suggested there may be no important differences in average effects between the manner of LSSS implementation ( Analysis 1.22 ) or the type of LSSS ( Analysis 1.23 ).
Four trials reported this outcome in an unusable format (e.g. reported only between‐group P values or did not report standard deviations and participant numbers along with mean change), and usable data were not provided when requested from authors ( Table 7 ). The funnel plot ( Figure 5 ) shows that most trials had similar effect sizes and standard errors, with some outliers, thereby limiting the ability to assess asymmetry. The mean difference in the fixed‐effect model for the primary analysis (‐3.92 mmHg, 95% CI ‐4.37 to ‐3.47) was similar to the mean difference when using the random‐effects model for the primary analysis (‐4.76 mmHg, 95% CI ‐6.01 to ‐3.50, Analysis 1.15 ). This suggests that any small‐study effects have little impact on the intervention effect estimate.
Hypertension
GRADE assessment suggests that, on average, LSSS may result in little to no difference in hypertension in the adult population, when compared to regular salt (low‐certainty evidence, downgraded once for risk of bias and once for imprecision).
One study following participants for 18 months reported on this outcome (risk ratio (RR) 0.97, 95% CI 0.90 to 1.03, 2566 participants, 1 RCT, low‐certainty evidence, Analysis 1.29 ), with 725 participants in the LSSS group and 738 participants in the regular salt group having prevalent hypertension at the end of the study. The absolute effect for hypertension was 17 fewer per 1000 (95% CI 58 fewer to 17 more). The stepped‐wedge cluster trial (unclear overall risk of bias) reported on incident hypertension in 1914 participants represented by 2712.3 person‐years at risk in the LSSS group and 1961.1 person‐years at risk in the regular salt group, and found a reduction in hypertension with LSSS compared to regular salt ( Analysis 1.30 ).
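The absolute effect quoted above follows the usual GRADE arithmetic: the risk difference per 1000 is (RR − 1) × assumed comparator risk × 1000. The sketch below assumes roughly equal group sizes (about 1283 per group) in the 2566‐participant trial, giving a comparator risk of 738/1283; this split is an assumption for illustration, not reported data:

```python
# GRADE-style conversion of a risk ratio to an absolute effect per 1000.
# The comparator risk assumes ~1283 participants per group (an assumption,
# not a reported split), i.e. 738/1283 with prevalent hypertension.

def absolute_effect_per_1000(rr: float, control_risk: float) -> float:
    """Risk difference per 1000 implied by RR at a given comparator risk."""
    return (rr - 1) * control_risk * 1000

control_risk = 738 / 1283                               # ~0.575
point = absolute_effect_per_1000(0.97, control_risk)    # ~ -17 (17 fewer)
low = absolute_effect_per_1000(0.90, control_risk)      # ~ -58 (58 fewer)
high = absolute_effect_per_1000(1.03, control_risk)     # ~ +17 (17 more)
print(round(point), round(low), round(high))
```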
Blood pressure control
GRADE assessment suggests that the evidence is very uncertain about the effect of LSSS on blood pressure control in adults, when compared to regular salt (very low‐certainty evidence, downgraded once for risk of bias, once for indirectness and once for imprecision).
Two small studies reported on this outcome (RR 2.12, 95% CI 1.32 to 3.41, I 2 = 0%, 253 participants, 2 RCTs, very low‐certainty evidence, Analysis 1.31 ), with the number of participants in the LSSS group achieving blood pressure control ranging from 16 to 19 and participants in the regular salt group achieving blood pressure control ranging from seven to 10. The absolute effect for blood pressure control was 143 more per 1000 (95% CI 41 more to 308 more). The two trials reporting this outcome had follow‐up at eight weeks and three months.
Cardiovascular events: various
GRADE assessment suggests that the evidence is very uncertain about the effect of LSSS interventions on various other cardiovascular events when compared to regular salt in the adult population (very low‐certainty evidence, downgraded once for indirectness and twice for imprecision).
It should be noted that a very small number of participants presented with various other cardiovascular events in both groups in the five trials reporting this outcome. Event numbers ranged from zero to eight with LSSS and from zero to five with regular salt. The meta‐analysis comparing LSSS and regular salt resulted in an RR of 1.22 (95% CI 0.49 to 3.04, I 2 = 0%, 982 participants, 5 RCTs, very low‐certainty evidence, Analysis 1.32 ). The absolute effect for various other cardiovascular events was 357 more per 100,000 (95% CI 828 fewer to 3310 more). Two trials reporting this outcome had follow‐up at ≤ 3 months, while three followed participants for > 3 to 12 months.
Two trials reported this outcome in an unusable format (i.e. numbers of events per group and numbers of participants per group not reported), and usable data were not provided when requested from authors ( Table 7 ).
Cardiovascular events: non‐fatal stroke
GRADE assessment for this outcome suggests that, on average, LSSS probably reduce non‐fatal stroke events slightly in adults, when compared to regular salt (moderate‐certainty evidence, downgraded once for indirectness).
In the two small trials ( Gilleran 1996 ; Pan 2017 ) reporting on this outcome at ≤ 3 months and > 3 to 12 months, respectively, the number of participants with non‐fatal stroke ranged from zero to four with LSSS, while one participant in each trial had a non‐fatal stroke with regular salt. The third, a large cluster‐RCT ( Neal 2021 ) that followed participants for a mean of 4.75 years, reported rates of events: 22.36 events per 1000 person‐years in the LSSS group and 24.86 events per 1000 person‐years in the regular salt group (rate ratio 0.90, 95% CI 0.80 to 1.01). The meta‐analysis combining these data as risk ratios resulted in an RR of 0.90 (95% CI 0.80 to 1.01, I 2 = 0%, 21,250 participants, 3 RCTs, moderate‐certainty evidence, Analysis 1.33 ) when comparing LSSS with regular salt. This result translates to an absolute effect for non‐fatal stroke of 20 fewer per 100,000 (95% CI 40 fewer to 2 more). Since this pooled effect was driven by a large secondary prevention trial with a preponderance of participants with previous stroke, we downgraded once for indirectness.
The estimated population impact, as described in Appendix 2 , indicated that the effect of the primary meta‐analysis ( Analysis 1.33 ) corresponded to an estimated 10 non‐fatal strokes prevented (ranging from 21 prevented to 1 caused) per 100,000 persons per year.
Sensitivity analyses restricted to trials with 'low' or 'unclear' overall risk of bias ( Analysis 1.34 ) and to trials randomising participants at the individual level, i.e. excluding cluster‐RCTs ( Analysis 1.35 ), did not reflect this benefit with LSSS; instead, they showed highly imprecise estimates consistent with little to no effect, or with harm.
One trial reported this outcome in an unusable format (i.e. numbers of participants per group not reported), and usable data were not provided when requested from authors ( Table 7 ).
Cardiovascular events: non‐fatal acute coronary syndrome
GRADE assessment suggests that LSSS probably reduce non‐fatal ACS events slightly, on average, when compared to regular salt in adults (moderate‐certainty evidence, downgraded once for indirectness).
A single large cluster‐RCT contributed data to this outcome at a mean follow‐up of 4.75 years, reporting rates of 3.79 events per 1000 person‐years in the LSSS group and 5.12 events per 1000 person‐years in the regular salt group. The rate ratio was 0.70 (95% CI 0.52 to 0.94, 20,995 participants, 1 RCT, moderate‐certainty evidence, Analysis 1.36 ) and the absolute effect for non‐fatal acute coronary syndrome was 150 fewer per 100,000 person‐years (95% CI 250 fewer to 30 fewer), when comparing LSSS with regular salt in this large secondary prevention trial in which most of the participants had a history of previous stroke. This setting limited generalisability to the general adult population, and the evidence was consequently downgraded once for indirectness.
The estimated population impact, as described in Appendix 2 , indicated that the effect of the primary analysis ( Analysis 1.36 ) corresponded to an estimated 50 (ranging from 10 to 80) non‐fatal ACS events prevented per 100,000 persons per year.
Cardiovascular mortality
GRADE assessment of this outcome suggests that LSSS probably reduce cardiovascular mortality slightly, on average, in adults when compared to regular salt (moderate‐certainty evidence, downgraded once for indirectness).
The number of cardiovascular mortality events per 1000 person‐years in the three trials reporting on this outcome ranged from 4.53 to 22.94 in the LSSS groups and from 7.81 to 26.30 in the regular salt groups. The meta‐analysis comparing LSSS with regular salt resulted in a rate ratio of 0.77 (95% CI 0.60 to 1.00, I 2 = 35%, 23,200 participants, 3 RCTs, moderate‐certainty evidence, Analysis 1.37 ). The absolute effect for cardiovascular mortality was 180 fewer per 100,000 person‐years (95% CI 310 fewer to 0 fewer). We downgraded this finding once for indirectness since the pooled effect was driven by the secondary prevention trial including a large proportion of participants with previous stroke ( Neal 2021 ). Two trials reporting on this outcome had a mean follow‐up of between 2.6 and 4.75 years ( Chang 2006 ; Neal 2021 ), while a third ( Zhou 2013 ) reported on this outcome following three years of active intervention and ten years of follow‐up.
The estimated population impact, as described in Appendix 2 , indicated that the effect of the primary meta‐analysis ( Analysis 1.37 ) corresponded to an estimated 53 cardiovascular deaths prevented (ranging from 92 prevented to none prevented or caused) per 100,000 persons per year.
A sensitivity analysis including only trials with 'low' or 'unclear' overall risk of bias ( Analysis 1.38 ) confirmed this effect.
Stroke mortality
GRADE assessment suggests that the evidence is very uncertain about the effect of LSSS on stroke mortality in adults, when compared to regular salt (very low‐certainty evidence, downgraded once for indirectness and twice for imprecision).
The number of stroke mortality events per 1000 person‐years in the two trials reporting on this outcome ranged from 2.01 to 6.78 in the LSSS groups and 5.85 to 8.79 in the regular salt groups. The meta‐analysis comparing LSSS with regular salt resulted in a rate ratio of 0.64 (95% CI 0.33 to 1.25, I 2 = 45%, 21,423 participants, 2 RCTs, very low‐certainty evidence, Analysis 1.39 ). The absolute effect for stroke mortality was 145 fewer per 100,000 person‐years (95% CI 270 fewer to 100 more). The pooled effect was driven to a considerable extent by a large secondary prevention trial including a large proportion of participants with previous stroke, resulting in limited generalisability to the general adult population. This trial ( Neal 2021 ) had a mean follow‐up of 4.75 years, while the second trial reporting on this outcome ( Zhou 2013 ) had a follow‐up of three years of active intervention and ten years thereafter.
The estimated population impact, as described in Appendix 2 , indicated that the effect of the primary meta‐analysis ( Analysis 1.39 ) corresponded to an estimated 28 stroke deaths prevented (ranging from 53 prevented to 20 caused) per 100,000 persons per year.
Change in blood potassium (mmol/L)
GRADE assessment suggests that, on average, LSSS probably increase blood potassium slightly compared to regular salt in the adult population (moderate‐certainty evidence, downgraded once for risk of bias).
Average changes in blood potassium ranged from a reduction of 0.2 mmol/L to an increase of 0.38 mmol/L with LSSS and from a reduction of 0.2 mmol/L to an increase of 0.3 mmol/L with regular salt in the six trials that reported this outcome. The meta‐analysis showed a small but important increase in blood potassium, on average, with LSSS compared to regular salt (MD 0.12 mmol/L, 95% CI 0.07 to 0.18, I 2 = 0%, 784 participants, 6 RCTs, moderate‐certainty evidence, Analysis 1.40 ).
This small yet important effect was confirmed by sensitivity analyses, including only trials with 'low' or 'unclear' overall risk of bias ( Analysis 1.42 ) and including only trials randomising participants at the individual level, i.e. excluding cluster‐RCTs ( Analysis 1.43 ). Individual trials reported results at 56 days, five weeks, 12 weeks and between one and 1.5 years, while two trials reported results at approximately six months.
Subgroup analyses were undertaken for this outcome to explore whether there were differences in effects in subgroups based on hyperkalaemia risk. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping participants by risk of hyperkalaemia as per Table 6 suggests there may be no important clinical differences in average effects between participants not at risk, at unclear risk and those at possible risk of hyperkalaemia ( Analysis 1.41 ).
One trial reported this outcome in an unusable format (i.e. reported change and significance of change in the control group only), and usable data were not provided when requested from authors ( Table 7 ).
Hyperkalaemia
GRADE assessment suggests that, on average, LSSS likely result in little to no difference in hyperkalaemia in the adult population when compared to regular salt (moderate‐certainty evidence, downgraded once for risk of bias).
It should be noted, however, that a very small number of participants presented with hyperkalaemia in both groups across the trials that reported this outcome. The number of participants with hyperkalaemia in the five trials reporting this outcome ranged from zero to 11 with LSSS and from zero to nine with regular salt. The meta‐analysis resulted in a RR of 1.04 (95% CI 0.46 to 2.38, I 2 = 0%, 22,849 participants, 5 RCTs, moderate‐certainty evidence, Analysis 1.44 ) when comparing LSSS to regular salt. The absolute effect for hyperkalaemia was 4 more per 100,000 (95% CI 47 fewer to 121 more).
A sensitivity analysis including only trials with 'low' or 'unclear' overall risk of bias ( Analysis 1.46 ) confirmed this effect, though this result was highly imprecise. A sensitivity analysis including only trials randomising participants at the individual level, i.e. excluding cluster‐RCTs ( Analysis 1.47 ), was not informative due to zero events in both trial arms. These five trials reported results after three months, 12 months, one to 1.5 years, two years and a mean of 4.75 years follow‐up.
Subgroup analyses were undertaken for this outcome to explore whether there were differences in effects in subgroups based on hyperkalaemia risk. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping participants by risk of hyperkalaemia as per Table 6 suggests there may be no important clinical differences in average effects between participants not at risk, at unclear risk and those at possible risk of hyperkalaemia ( Analysis 1.45 ).
Hypokalaemia
GRADE assessment of this outcome suggests that the evidence is very uncertain about the effects of LSSS on hypokalaemia when compared to regular salt in the adult population (very low‐certainty evidence, downgraded once for risk of bias and twice for indirectness).
A single, small trial ( Pereira 2005 ) reported no hypokalaemia events in either trial arm comparing LSSS and regular salt in young participants with hypertension requiring potassium supplementation due to the use of potassium‐depleting diuretics (RR and 95% CI not estimable, 22 participants, 1 RCT, very low‐certainty evidence, Analysis 1.48 ). This study reported outcomes at 12 weeks.
Secondary outcomes
For comparison 1, no studies reported on diabetes mellitus diagnosis or hyponatraemia.
All‐cause mortality
The number of all‐cause mortality events in two trials ( CSSS Collaborative Group 2007 ; Pan 2017 ) reporting on this outcome at > 3 to 12 months ranged from three to four with LSSS and one to four with regular salt. Three additional trials ( Chang 2006 ; Neal 2021 ; Zhou 2013 ) reported rates of events and rate ratios; these ranged from 11.08 to 93.45 events per 1000 person‐years in the LSSS groups and 13.66 to 101.29 events per 1000 person‐years in the regular salt groups and corresponded to rate ratios (95% CIs) of 0.92 (0.77 to 1.10), 0.88 (0.82 to 0.95) and 0.81 (0.46 to 1.42), respectively. The meta‐analysis combining these data as risk ratios resulted in an RR of 0.89 (95% CI 0.83 to 0.95, I 2 = 0%, 24,005 participants, 5 RCTs, Analysis 1.49 ) when comparing LSSS with regular salt.
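The pooling step can be sketched with a fixed‐effect inverse‐variance calculation over the three trials that reported rate ratios with CIs. This is a simplified reconstruction, not the review's actual analysis, which also incorporates the two trials contributing raw event counts; the result therefore only approximates the reported RR of 0.89 (95% CI 0.83 to 0.95):

```python
import math

# Fixed-effect inverse-variance pooling on the log scale, using the three
# trial-level rate ratios and 95% CIs quoted above. Standard errors are
# recovered from the CI width; weights are the inverse of the variance.

trials = [(0.92, 0.77, 1.10), (0.88, 0.82, 0.95), (0.81, 0.46, 1.42)]

weights, weighted_logs = [], []
for rr, lo, hi in trials:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1 / se ** 2                                  # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * math.log(rr))

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))

print(round(pooled, 2), round(ci[0], 2), round(ci[1], 2))
```

With these three ratios alone the sketch yields a pooled ratio close to the review's figure, which illustrates how heavily the narrowest‐CI trial dominates the inverse‐variance weighting.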
Adverse events (other)
GRADE assessment suggests that the evidence is very uncertain about the effect of LSSS on other adverse events when compared to regular salt in adults (very low‐certainty evidence, downgraded once for risk of bias, once for inconsistency and once for imprecision).
The number of participants with other adverse events in the eight trials reporting this outcome ranged from zero to 17 with LSSS and from zero to seven with regular salt. The events reported were highly diverse and not suitable for pooling in a meta‐analysis (2109 participants, 8 RCTs, very low‐certainty evidence, Analysis 1.50 ). Four trials reporting on other adverse events reported these at ≤ 3 months, three reported on this outcome at > 3 to 12 months and one trial reported other adverse events at > 12 months.
Subgroup analyses were undertaken for this outcome to explore whether there were differences in effects in subgroups based on hyperkalaemia risk. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping participants by risk of hyperkalaemia as per Table 6 suggests there may be no important clinical differences in average effects between participants not at risk and those at possible risk of hyperkalaemia ( Analysis 1.51 ).
One trial reported this outcome in an unusable format (i.e. numbers of events per group not reported), and usable data were not provided when requested from authors ( Table 7 ).
Antihypertensive medication use
The number of participants using antihypertensive medication across the four trials reporting this outcome ranged from 34 to 246 with LSSS and from 52 to 267 with regular salt. Meta‐analysis of the number of participants using antihypertensive medication resulted in a RR of 0.80 (95% CI 0.67 to 0.95, I 2 = 53%, 3301 participants, 4 RCTs, Analysis 1.52 ) when comparing LSSS and regular salt groups. The one individually randomised trial that reported this outcome had follow‐up at three months; the three cluster trials reporting this outcome had follow‐up at 12, 18 and 36 months. The stepped‐wedge cluster trial (unclear overall risk of bias) reported no changes in medication use, with 10.5% at baseline and 10.1% (P = 0.73) at the end of the study three years later; this was for hypertension and type 2 diabetes mellitus medication use combined.
Subgroup analyses were undertaken for this outcome due to the presence of substantial heterogeneity. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping by duration of study ( Analysis 1.53 ) as well as age ( Analysis 1.54 ), gender ( Analysis 1.55 ), BMI ( Analysis 1.56 ) or hypertensive status ( Analysis 1.57 ) of participants at baseline suggests there may be no important clinical differences in average effects between subgroups. Subgrouping by ethnicity, type and implementation of LSSS, 24‐h sodium excretion at baseline and 24‐h potassium excretion at baseline could not be used to explore clinical differences, as all studies were categorised into the same subgroup.
Change in BMI (kg/m 2 )
Average change in BMI ranged from a reduction of 1.6 kg/m 2 to no change with LSSS and from a reduction of 1.0 kg/m 2 to no change with regular salt in the four trials that reported this outcome. We did not pool these results into an overall effect estimate due to considerable heterogeneity (I² = 96%, Tau² = 0.80, Chi² = 70.15, df = 3, P < 0.00001), but rather presented the individual effect sizes between LSSS and regular salt per study (2060 participants, 4 RCTs, Analysis 1.58 ). Trials reporting on this outcome followed participants for eight weeks, 12 weeks, four months and 18 months.
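The I² values cited alongside these heterogeneity assessments follow directly from the reported Chi² (Q) statistic and its degrees of freedom via Higgins' formula; a quick illustrative check against figures reported in this section:

```python
# Higgins' I-squared, recovered from the Cochran Q (Chi-squared) statistic and
# its degrees of freedom: I^2 = max(0, (Q - df) / Q) * 100.

def i_squared(q: float, df: int) -> float:
    return max(0.0, (q - df) / q) * 100

print(round(i_squared(70.15, 3)))   # BMI analysis above: 96 (%)
print(round(i_squared(17.06, 1)))   # fasting blood glucose analysis in this section: 94 (%)
```

Values below zero are truncated to 0%, which is why several analyses in this review report I² = 0% despite small positive Q statistics.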
Change in serum creatinine (μmol/L)
Average change in serum creatinine ranged from a reduction of 0.8 μmol/L to an increase of 2 μmol/L with LSSS and from a reduction of 1.0 μmol/L to an increase of 3.54 μmol/L with regular salt in the three trials that reported this outcome. The mean difference in serum creatinine, on average, between LSSS and regular salt groups was 2.56 μmol/L (95% CI ‐0.59 to 5.71, I 2 = 0%, 616 participants, 3 RCTs, Analysis 1.59 ). Trials reporting on this outcome followed participants for 12 weeks, six months and between one and 1.5 years.
Subgroup analyses were undertaken for this outcome to explore whether there are differences in effects in subgroups based on hyperkalaemia risk. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping participants by risk of hyperkalaemia as per Table 6 suggests there may be no important clinical differences in average effects between participants not at risk and those at possible risk of hyperkalaemia ( Analysis 1.60 ).
Microalbuminuria
Two studies reported on this outcome (RR 0.67, 95% CI 0.53 to 0.84, I 2 = 0%, 2382 participants, 2 RCTs, Analysis 1.61 ), with the number of participants in the LSSS group with microalbuminuria ranging from 45 to 64, and participants in the regular salt group with microalbuminuria ranging from 58 to 84 across trials. The two trials reporting this outcome had follow‐up at 18 months and three years.
Subgrouping participants by risk of hyperkalaemia could not be conducted as participants in both trials were at unclear risk of hyperkalaemia as per Table 6 .
Macroalbuminuria
One trial reported on this outcome at 18 months (RR 0.48, 95% CI 0.16 to 1.39, 1903 participants, 1 RCT, Analysis 1.62 ), with five participants in the LSSS group experiencing macroalbuminuria events and 10 participants in the regular salt group experiencing macroalbuminuria events.
Change in urinary albumin‐to‐creatinine ratio (uACR)
One trial reported end values for this outcome at 18 months (MD ‐1.68, 95% CI ‐2.87 to ‐0.49, 1903 participants, 1 RCT, Analysis 1.63 ).
Change in fasting blood glucose (mmol/L)
Two trials reported on this outcome for this comparison. Average changes in fasting blood glucose ranged from an increase of 0.1 mmol/L to an increase of 0.22 mmol/L with LSSS and from no change to an increase of 0.10 mmol/L with regular salt across the two trials. We did not pool these results into an overall effect estimate due to considerable heterogeneity (I² = 94%, Tau² = 0.56, Chi² = 17.06, df = 1, P < 0.0001), but rather presented the individual effect sizes between LSSS and regular salt per study (338 participants, 2 RCTs, Analysis 1.64 ). The two trials reporting on the outcome followed up participants for four and six months each.
Change in blood triglycerides (mmol/L)
Five trials reported on change in blood triglycerides for this comparison. Average changes in blood triglycerides ranged from a reduction of 0.7 mmol/L to an increase of 0.15 mmol/L with LSSS and from a reduction of 0.01 mmol/L to an increase of 0.9 mmol/L in blood triglycerides with regular salt across the five trials. The change in blood triglycerides, on average, between LSSS and regular salt groups was ‐0.11 mmol/L (95% CI ‐0.91 to 0.69, I 2 = 81%, 420 participants, 5 RCTs, Analysis 1.65 ). Trials reporting on the outcome followed up participants for five weeks, 12 weeks, four months, six months and nine months each.
Subgroup analyses were undertaken for this outcome due to the presence of substantial heterogeneity. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping by study duration ( Analysis 1.66 ), age ( Analysis 1.67 ), ethnicity ( Analysis 1.68 ), BMI ( Analysis 1.69 ), blood pressure status ( Analysis 1.70 ), type of LSSS ( Analysis 1.72 ), baseline sodium excretion ( Analysis 1.73 ) and baseline potassium excretion ( Analysis 1.74 ) suggests there may be no important differences in average effects between these subgroups. Subgrouping by the method of implementation of LSSS yielded only a few studies in each subgroup, resulting in an analysis that was not powered to detect any differences in effect ( Analysis 1.71 ). Subgrouping by gender could not be used to explore clinical differences as all studies were categorised into the same subgroup.
Change in total blood cholesterol (mmol/L)
Six trials reported on change in total blood cholesterol for this comparison. One trial each followed up participants for five weeks, 12 weeks and four months; two trials reporting on this outcome had follow‐up of approximately six months and one trial followed up participants for nine months. Across the trials, average changes in total blood cholesterol ranged from a reduction of 0.7 mmol/L to an increase of 0.16 mmol/L with LSSS and from a reduction of 0.63 mmol/L to an increase of 0.24 mmol/L with regular salt. When LSSS were compared to regular salt, the mean difference in total cholesterol change on average was ‐0.31 mmol/L (95% CI ‐0.74 to 0.12, I 2 = 85%, 509 participants, 6 RCTs, Analysis 1.75 ).
Subgroup analyses were undertaken for this outcome due to the presence of substantial heterogeneity. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping by study duration ( Analysis 1.76 ), age ( Analysis 1.77 ), ethnicity ( Analysis 1.78 ), BMI ( Analysis 1.79 ), blood pressure status ( Analysis 1.80 ), implementation of LSSS ( Analysis 1.81 ), type of LSSS ( Analysis 1.82 ), baseline sodium excretion ( Analysis 1.83 ) and baseline potassium excretion ( Analysis 1.84 ) suggests there may be no important differences in average effects between these subgroups. Subgrouping by gender could not be used to explore clinical differences as all studies were categorised into the same subgroup.
Change in 24‐h urinary sodium excretion (mmol/24‐h)
Eleven trials reported on change in 24‐h urinary sodium excretion for this comparison. Average changes in this outcome ranged from a reduction of 75.5 mmol (1730 mg) sodium/24‐h to an increase of 20.2 mmol (460 mg) sodium/24‐h with LSSS and from a reduction of 31 mmol (710 mg) sodium/24‐h to an increase of 11 mmol (250 mg) sodium/24‐h with regular salt across the trials. We did not pool these results into an overall effect estimate due to considerable heterogeneity (I² = 91%, Tau² = 595.13, Chi² = 107.72, df = 10, P < 0.00001), but rather presented the individual effect sizes between LSSS and regular salt per study (3885 participants, 11 RCTs, Analysis 1.85 ). Three trials reporting on this outcome followed up participants for four, five, and eight weeks; five trials followed up participants for three, four, nine, 18 and 60 months each; three trials reported on the outcome at approximately six months.
The stepped‐wedge cluster trial (unclear overall risk of bias) reporting on 605 participants reported little to no difference between a LSSS and regular salt for this outcome ( Analysis 1.86 ).
Change in 24‐h urinary potassium excretion (mmol/24‐h)
Eleven trials reported on change in 24‐h urinary potassium excretion for this comparison. Average changes in this outcome ranged from a reduction of 4.4 mmol (170 mg) potassium/24‐h to an increase of 18.5 mmol (720 mg) potassium/24‐h with LSSS and from a reduction of 16 mmol (630 mg) potassium/24‐h to an increase of 4.6 mmol (180 mg) potassium/24‐h with regular salt across the trials. The meta‐analysis showed a difference, on average, favouring LSSS over regular salt in 24‐h urinary potassium excretion (MD 11.44 mmol (450 mg) potassium/24‐h, 95% CI 7.62 to 15.26 mmol/24‐h [298 to 597 mg/24‐h], I 2 = 82%, 3885 participants, 11 RCTs, Analysis 1.87 ). Three trials reporting on the outcome followed up participants for four, five and eight weeks; five trials followed up participants for three, four, nine, 18 and 60 months each; three trials reported on the outcome at approximately six months.
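The mg equivalents given in parentheses above are consistent with converting mmol using the molar masses of the elements (sodium roughly 23.0 mg/mmol, potassium roughly 39.1 mg/mmol), with the review rounding the results; a minimal sketch of the conversion:

```python
# mmol-to-mg conversion for urinary electrolyte excretion, using approximate
# molar masses (mg per mmol). These constants are standard chemistry values,
# not taken from the review itself.

MG_PER_MMOL = {"sodium": 23.0, "potassium": 39.1}

def mmol_to_mg(mmol: float, element: str) -> float:
    return mmol * MG_PER_MMOL[element]

print(round(mmol_to_mg(7.62, "potassium")))   # 298, the lower CI bound above
print(round(mmol_to_mg(15.26, "potassium")))  # 597, the upper CI bound above
print(round(mmol_to_mg(11.44, "potassium")))  # 447, reported rounded as 450
```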
The stepped‐wedge cluster trial (unclear overall risk of bias) reporting on 605 participants found a similar direction of effect, though it was far smaller ( Analysis 1.98 ).
Subgroup analyses were undertaken for this outcome due to the presence of substantial heterogeneity. In line with Cochrane guidance ( Deeks 2020 ) detailing the limitations of subgroup analyses, caution was taken in the interpretation of findings from these subgroup analyses. Subgrouping by study duration ( Analysis 1.88 ), age ( Analysis 1.89 ), gender ( Analysis 1.90 ), ethnicity ( Analysis 1.91 ), BMI ( Analysis 1.92 ), type of LSSS ( Analysis 1.95 ), baseline sodium excretion ( Analysis 1.96 ) and baseline potassium excretion ( Analysis 1.97 ) suggests there may be no important differences in average effects between these subgroups.
Subgrouping by blood pressure status suggested some differences between subgroups ( Analysis 1.93 ), but this was driven mainly by a large subgroup of participants of mixed hypertensive status from one study ( Neal 2021 ). Therefore, the observed difference in effect was not considered to be attributable to hypertensive status. Subgrouping by the method of implementation of LSSS yielded only a few studies in two of the three subgroups, resulting in an analysis that was not powered to detect any differences in effect ( Analysis 1.94 ).
Comparison 2. Low‐sodium salt substitutes versus regular salt or no active intervention in children
Table 2 presents the effects of LSSS compared to regular salt in children on changes in DBP and SBP.
A single RCT randomising families as clusters and reporting on 92 children was included in this comparison. Key details about the study in this comparison, including study design, setting and overall risk of bias; characteristics of the intervention, comparator, population and outcomes; method of synthesis, and time points of measurement are included in the Overview of Synthesis and Included Studies (OSIS) table ( Table 9 ).
Primary outcomes
For comparison 2, no studies reported on hypertension, blood pressure control, change in blood potassium, hyperkalaemia or hypokalaemia.
Change in diastolic blood pressure (DBP, mmHg)
GRADE assessment suggests that the evidence is very uncertain about the effect of LSSS on changes in DBP, when compared to regular salt, in children (very low‐certainty evidence, downgraded once for risk of bias, once for indirectness and once for imprecision).
The average change in DBP was a reduction of 2.1 mmHg in the group that ate bread containing LSSS and a reduction of 5.87 mmHg in the group that ate bread containing regular salt for the single cluster‐RCT that reported this outcome at four months follow‐up. The mean difference, when comparing these groups, was 1.28 mmHg (95% CI ‐1.56 to 4.12, 92 participants, 1 RCT, very low‐certainty evidence, Analysis 2.1 ).
Change in systolic blood pressure (SBP, mmHg)
GRADE assessment suggests that the evidence is very uncertain about the effect of LSSS on changes in SBP, when compared to regular salt, in children (very low‐certainty evidence, downgraded once for risk of bias, once for indirectness and once for imprecision).
The average change in SBP was a reduction of 5.1 mmHg in the group that ate bread containing LSSS and a reduction of 6.05 mmHg in the group that ate bread containing regular salt for the single cluster‐RCT that reported this outcome at four months follow‐up. The mean difference when comparing these groups was 0.12 mmHg (95% CI ‐4.41 to 4.64, 92 participants, 1 RCT, very low‐certainty evidence, Analysis 2.2 ).
Secondary outcomes
For comparison 2, no studies reported on adverse events (other), cardiovascular events, antihypertensive medication use, all‐cause mortality, cardiovascular mortality, bone densitometry measures, renal function, bone health, hyponatraemia, changes in fasting blood glucose, changes in blood triglycerides or changes in total blood cholesterol.
Growth changes (e.g. z‐scores for height‐ or length‐for‐age (HAZ or LAZ), weight‐for‐height (WHZ), weight‐for‐age (WAZ), BMI‐for‐age)
In the trial reporting on BMI changes in children, the unadjusted average reductions in BMI were 1.62 kg/m 2 with bread containing LSSS and 1.50 kg/m 2 with bread containing regular salt. The mean difference in BMI, on average, was 0.94 kg/m 2 (95% CI 0.85 to 1.03, 92 participants, 1 RCT, Analysis 2.3 ) when comparing these groups at four months.
Change in 24‐h urinary sodium excretion (mmol/24‐h)
The average change in 24‐h urinary sodium excretion was an increase of 11.4 mmol (262 mg) sodium/24‐h in the group that ate bread containing LSSS and a reduction of 3.2 mmol (74 mg) sodium/24‐h in the group that ate bread containing regular salt for the single cluster‐RCT that reported this outcome at four months follow‐up. The mean difference when comparing these groups was 14.60 mmol (336 mg) sodium/24‐h (95% CI ‐11.22 to 40.42 mmol/24‐h [‐258 to 929 mg/24‐h], 92 participants, 1 RCT, Analysis 2.4 ).
Change in 24‐h urinary potassium excretion (mmol/24‐h)
The average change in 24‐h urinary potassium excretion was a reduction of 1.6 mmol (64 mg) potassium/24‐h in the group that ate bread containing LSSS and a reduction of 5.7 mmol (223 mg) potassium/24‐h in the group that ate bread containing regular salt for the single cluster‐RCT that reported this outcome at four months follow‐up. The mean difference when comparing these groups was 4.10 mmol (160 mg) potassium/24‐h (95% CI ‐5.13 to 13.33 mmol/24‐h [‐201 to 521 mg/24‐h], 92 participants, 1 RCT, Analysis 2.5 ).
Comparison 3. Low‐sodium salt substitutes versus regular salt or no active intervention in pregnant women
No eligible studies in pregnant women were found. | Discussion
Summary of main results
This review examined the effects and safety of LSSS compared to regular salt or no active intervention on blood pressure and cardiovascular health in adults, children and pregnant women. We included 16 RCTs and ten cluster‐RCTs (26 trials in total) conducted in adults from a range of different settings, including nursing homes, hospitals, rural and suburban households and communities, as well as rural villages. Importantly, these 26 trials included various clinical subpopulations, with nearly two‐thirds of trials conducted in people with existing hypertension. We also included one cluster‐RCT including healthy children. The proportion of sodium chloride replacement in the LSSS interventions varied from approximately 3% to 77%, with 24 trials replacing the sodium chloride with some potassium chloride (more details in the Characteristics of included studies section). We did not find any eligible studies in pregnant women. We also did not find any eligible prospective analytical cohort studies.
Low‐sodium salt substitutes versus regular salt or no active intervention in adults
Adult participants in groups allocated to LSSS had lowered DBP and SBP (range of reductions: 0.6 to 11.33 mmHg and 1.5 to 15.25 mmHg, respectively) on average, while those allocated to regular salt had smaller reductions and more variable results for both DBP (range of change: 7 mmHg reduction to 2.6 mmHg increase) and SBP (range of change: 6.8 mmHg reduction to 4 mmHg increase) on average.
In adult participants, meta‐analysis showed that LSSS probably reduce DBP slightly at up to 60 months. The 95% CIs of the pooled mean differences did not include clinically meaningful benefit (ranging from 1.36 to 3.50 mmHg lower), as we considered changes in DBP of greater than 5 mmHg to be clinically meaningful due to a 'significant' reduction in stroke risk of approximately 60% in high risk individuals ( Thomopoulos 2017 ). However, small mean reductions for an entire population are more beneficial than very large reductions in only those at high risk ( Verbeek 2021 ). Since the review focussed on the population‐level substitution of regular salt with LSSS, we applied a population perspective and derived a simplified population impact estimate (as described in Appendix 2 ) of the reduction in DBP observed in our meta‐analysis. This estimate suggested that the observed reduction in DBP corresponded to an estimated 60 (ranging from 35 to 83) stroke deaths prevented per 100,000 persons aged 50 years and older, per year. The observed small, important mean difference in DBP between LSSS and regular salt was confirmed by sensitivity analyses. Substantial heterogeneity was, however, detected in the pooled analysis for this outcome: subgrouping by various characteristics of included studies, participants and interventions suggests there may be no important clinical differences in average effects between the various subgroups.
In adult participants, meta‐analysis showed that LSSS probably reduce SBP slightly at up to 60 months. The 95% CIs of the pooled mean differences did not include clinically meaningful benefit (ranging from 3.50 to 6.01 mmHg lower), as we considered changes of at least 10 mmHg in SBP to be clinically meaningful. This cut‐off was informed by a systematic review with meta‐regression, quantifying the effects of blood pressure reduction on cardiovascular outcomes and death from large‐scale blood‐pressure lowering trials, which indicated that relative risk reductions are proportional to the magnitude of blood‐pressure reduction, with every 10 mmHg reduction in SBP significantly reducing the risk of major cardiovascular disease events ( Ettehad 2016 ). In addition, the overview and meta‐analysis by Thomopoulos 2017 , investigating the effects of blood‐pressure lowering treatment and stratifying participants by total cardiovascular risk, reported a 'significant' stroke reduction of 60% with a 10 mmHg reduction in SBP in individuals at high risk. Our simplified population impact estimate suggested that the observed reduction in SBP with LSSS compared to regular salt corresponds to an estimated 53 (ranging from 40 to 65) stroke deaths prevented per 100,000 persons aged 50 years and older, per year. The observed small, important mean difference in SBP between LSSS and regular salt was broadly confirmed by sensitivity analyses. We explored the substantial heterogeneity detected in the pooled analysis for SBP using subgroup analyses, which suggest there may be no important clinical differences in average effects between the various subgroups.
For the presence of hypertension at 18 months, one trial showed little to no difference between the effects of LSSS and regular salt in adult participants. A stepped‐wedge cluster trial measuring incident hypertension indicated a more pronounced difference, with the hazard ratio for this outcome favouring LSSS.
We do not know whether there is a difference in the number of adult participants per group who achieve blood pressure control as the 95% CI limits of the pooled effect were consistent with the possibility for unimportant and important benefit, and most of the information for this outcome came from a study at unclear overall risk of bias. Furthermore, this study had limited generalisability as it investigated the effect of LSSS comprising 97% sodium chloride (regular salt) and therefore did not represent the composition of the majority of LSSS formulations on the market, most of which contain 65% or less sodium chloride.
We also do not know whether there is a difference in the number of adult participants per group who experience various cardiovascular events. Only 18 of these events were reported in total, including angina, serious cardiovascular events and cardiovascular symptoms, resulting in very imprecise 95% CI limits around the pooled effect. In addition, the pooled effect was driven by a large study in individuals at high risk of future vascular disease, thereby limiting the generalisability of the findings.
In adult participants, meta‐analysis showed that LSSS probably reduce non‐fatal stroke slightly at up to 60 months. The 95% CIs of the pooled risk and rate ratios included unimportant benefit and unimportant harm or no effect (ranging from a RR of 0.80 to 1.01), as we considered relative measures of less than 0.75 or greater than 1.25 to be important or 'appreciable' ( Guyatt 2011 ). The simplified population impact estimate we derived (as described in Appendix 2 ) suggested that the observed relative risk when LSSS was compared to regular salt corresponds to an estimated 10 non‐fatal strokes prevented (ranging from 21 prevented to 1 caused) per 100,000 persons per year. The observed benefit with LSSS was not reflected in sensitivity analyses, which instead showed highly imprecise estimates consistent with little to no effect or with harm.
Meta‐analysis in adult participants showed that LSSS probably reduce non‐fatal ACS slightly at up to 60 months. The 95% CIs of this effect included important and unimportant benefit (ranging from a rate ratio of 0.52 to 0.94), as we considered relative measures of less than 0.75 or greater than 1.25 to be important. The simplified population impact estimate we derived suggested that the observed relative risk when LSSS was compared to regular salt corresponds to an estimated 50 non‐fatal ACS events prevented (ranging from 10 to 80 prevented) per 100,000 persons per year.
In adult participants, meta‐analysis showed that LSSS probably reduce cardiovascular mortality slightly at up to 60 months. The 95% CIs of the pooled rate ratios included important benefit and no effect (ranging from a rate ratio of 0.60 to 1.00), as we considered relative measures of less than 0.75 or greater than 1.25 to be important. The simplified population impact estimate we derived suggested that the observed relative risk when LSSS was compared to regular salt corresponds to an estimated 53 cardiovascular deaths prevented (ranging from 92 prevented to none caused or prevented) per 100,000 persons per year. The observed relative effect between LSSS and regular salt was confirmed by sensitivity analysis excluding trials at high risk of overall bias.
We do not know whether there is a difference in the number of stroke deaths per group in adults as the 95% CI limits of the pooled effect were consistent with the possibility for important harm and important benefit. Furthermore, the generalisability of the results to the general population was limited as the pooled effect was driven by a large secondary prevention trial in which 73% of included participants had previously had a stroke.
In adult participants, meta‐analysis showed that LSSS probably increase blood potassium slightly at up to one and a half years. The 95% CIs of the pooled mean differences did not include clinically meaningful changes (ranging from 0.07 to 0.18 mmol/L higher), as we considered changes in blood potassium of greater than 1.0 mmol/L to be clinically meaningful. This threshold is based on variation around the 'normal' blood potassium range of 3.6 to 5.0 mmol/L ( Cohn 2000 ): an increase of more than 1.0 mmol/L above this range could reach moderate hyperkalaemia, defined as 6.0 to 6.9 mmol/L ( Hollander‐Rodriguez 2006 ; Ahee 2000 ).
For the presence of hyperkalaemia at up to 60 months, some trials reported no events while others reported hyperkalaemic events in both the LSSS and regular salt groups. In adult participants, meta‐analyses showed that LSSS likely result in little to no difference in hyperkalaemia. The 95% CIs of the pooled risk ratios included important benefit and important harm (ranging from a RR of 0.46 to 2.38), as we considered relative measures of less than 0.75 or greater than 1.25 to be important. Only five trials reported on this outcome, though all information included in the meta‐analysis came from two trials in participants judged to be at possible and unclear risk of hyperkalaemia.
We do not know whether there is a difference in the number of adult participants per group who experience hypokalaemia events, as the single small trial reporting on this outcome recorded zero events in both groups. In addition, this trial was at unclear overall risk of bias, and included only younger participants with hypertension treated with potassium‐depleting diuretics. Consequently, as the rationale for the administration of LSSS was potassium supplementation, we considered the generalisability of the evidence to be limited.
We also do not know whether there is a difference in the number of adult participants per group who experience other adverse events. Most of the information for this outcome was from studies at high or unclear overall risk of bias, events were very sparsely reported (39 in total), and outcomes were too diverse to pool.
Low‐sodium salt substitutes versus regular salt or no active intervention in children
Children (mean age 9.5 (SD 4.2) years) allocated to bread containing LSSS showed average reductions in DBP and SBP of 2.1 mmHg and 5.1 mmHg, respectively. Children (mean age 8.4 (SD 3.5) years) allocated to bread containing regular salt showed larger average reductions in both DBP and SBP (5.87 mmHg and 6.05 mmHg, respectively).
In all participants aged 18 or younger, results from a single included trial showed that the evidence is very uncertain about the effect of LSSS compared to regular salt on change in DBP as well as SBP at four months. The trial contributing to these outcomes was at unclear overall risk of bias, and reported effects with wide 95% CIs including both reductions and increases in DBP and SBP. Furthermore, the intervention was delivered in bread only, thereby limiting generalisability to discretionary use settings.
Overall completeness and applicability of evidence
Our review made use of a comprehensive search strategy with no language or date restrictions to identify all RCTs and prospective analytical cohort studies assessing the effect of LSSS on cardiovascular health in adults, children and pregnant women in the general population. We searched multiple sources of information for all studies and handsearched three relevant systematic reviews to identify additional studies. We also contacted study authors in cases where we required additional data or information.
We found only one trial, included under Comparison 1 for the effect of LSSS on adults, which additionally reported certain outcomes in children. The sparse evidence in children may be due to the relatively low prevalence of elevated blood pressure in children, with two large systematic reviews conducted in low‐ and middle‐income settings reporting pooled prevalences of 5.5% and 9.8% in children and adolescents from Africa and China, respectively ( Noubiap 2017 ; Wang 2019 ). A large systematic review and meta‐regression conducted in 122,000 adolescents further indicated that elevated blood pressure disproportionately affected adolescents in low‐ and middle‐income countries ( De Moraes 2014 ). Consequently, maximal population‐level effects would be achieved by targeting not children but adults, a group in which the global prevalence of hypertension (defined as a SBP ≥ 140 mmHg and DBP ≥ 90 mmHg) was approximately 32.5% for adults aged 30 years and older in 2019 ( NCD Risk Factor Collaboration 2021 ).
We found no studies assessing the effect of LSSS in pregnant women. While chronic hypertension is a known significant risk factor for pre‐eclampsia ( Bilano 2014 ), mean arterial pressure (MAP) in the first and second trimester has been suggested as a better predictor of pre‐eclampsia than SBP and DBP ( Cnossen 2008 ). This study additionally reported that high MAP before pregnancy can be used as a predictor of pre‐eclampsia ( Cnossen 2008 ), suggesting that the timing of blood pressure management is important and that interventions to lower blood pressure prior to pregnancy might have the most success in avoiding complications for the mother and infant. In addition, pre‐eclampsia is a complex disease involving multiple organ systems ( Palei 2013 ) and risk factors ( Bilano 2014 ), and there is a paucity of evidence to support lifestyle interventions, such as reducing dietary sodium intake, for preventing pre‐eclampsia ( Thangaratinam 2011 ).
Though our review included a reasonable distribution of studies from low‐ and middle‐income (n = 15) and high‐income countries (n = 12), the majority of trials (n = 15) were conducted in Asian populations. Furthermore, only one included study was conducted in South America, and we found no eligible studies conducted in Africa, Oceania or North America. As different populations may use different quantities of discretionary salt, as a proportion of total salt intake, this may affect the degree to which substitution with discretionary LSSS alters sodium and potassium intakes. Subgroup analyses, although limited, suggested there may be no clinically important differences in average effects on blood pressure between subgroups by ethnicity, indicating that the mix of ethnicities included in the review may not systematically bias our pooled estimates towards Asian populations. It is more difficult to judge whether evidence from countries and regions not represented in the review would have changed our pooled estimates. Potential systematic differences between populations are likely to be biological or behavioural; plausible examples include, respectively, differences in the baseline prevalence and extent of hypertension, and differences in adherence to LSSS. Our review included populations with diverse baseline risks of hypertension and diverse baseline 24‐h urinary excretion of sodium and potassium, but subgroup analyses of these factors suggest there may be no clinically important differences in average effects on blood pressure.
Importantly, the findings of subgroup analyses should always be interpreted with caution as these can often be misleading ( Deeks 2020 ). The likelihood of false negative and false positive results increases rapidly when numerous subgroup analyses are undertaken, and statistical power to detect significant differences between subgroups is often lacking ( Cuijpers 2021 ; Deeks 2020 ). In our review in particular, subgroup analyses were often limited by very few studies or participants contributing information to certain subgroups. As a result, findings from the subgroup analyses in our review may not all be sufficiently robust and should be interpreted with caution and with consideration of the described limitations.
Most of the trials included in our review assessed the effects of LSSS in participants with elevated blood pressure at enrolment. Due to limited data, we could not adequately examine effect modification for the relationship between LSSS use and outcomes by hypertension status.
All trials included in the review specifically excluded participants in whom it is known that an increased intake of potassium could cause harm, for example, people with CKD, type 1 or 2 diabetes mellitus, impaired renal function or those using potassium‐sparing medications. This limits the generalisability of our findings regarding the effects and safety of LSSS to these subpopulations, as well as to settings where a considerable proportion of the population may have undiagnosed conditions rendering increased potassium intake potentially harmful.
Furthermore, the majority of included trials investigated the implementation of LSSS as a discretionary intervention. This limits the generalisability of our findings to non‐discretionary applications of LSSS, particularly use in condiments and in manufactured food products, or in foods sold in restaurants, markets and cafeterias and by street vendors.
We did not find evidence for a number of prespecified outcomes in the review. Across comparisons, no studies reported on diabetes mellitus (DM) diagnosis or hyponatraemia. The absence of evidence on hyponatraemia represents a gap in the evidence related to the safety of LSSS. While global sodium intakes are approximately double the WHO recommendation at present, thereby lowering the overall likelihood of hyponatraemia events, the individual risk of this event remains, particularly in older people and those using thiazide diuretics ( Filippatos 2017 ; Upadhyay 2009 ).
In Comparison 2, a single study reporting on the effects of LSSS in children did not report on hypertension, blood pressure control, hyper‐ or hypokalaemia, changes in blood potassium or adverse events. The paucity of evidence for these outcomes in children is likely due to the low general prevalence of hypertension as well as conditions and risk factors related to blood potassium imbalances.
Quality of the evidence
The interpretation of many of the trials included in the review is constrained by small sample sizes and considerable loss to follow‐up. Lack of information on baseline exposure and dietary status, as well as on adherence to the allocated intervention, has been identified as a factor that undermines the translation of dietary clinical trials into practice ( Mirmiran 2021 ); these details were generally sparsely and diversely reported by the trials included in the review. Nine trials included in the review were judged as having low risk of bias overall; 12 were at unclear overall risk of bias.
The pooled estimates of the effect of LSSS interventions compared to regular salt on hypertension, blood pressure control, and stroke mortality were downgraded for imprecision in line with the minimally contextualised approach we used. Imprecision was also identified for the outcomes: various cardiovascular events and other adverse events. These were composite outcomes and studies reporting on them were not designed or powered to detect differences between LSSS and regular salt groups.
The evidence for blood pressure control was considered indirect because it is questionable whether the intervention in the study contributing the most data was sufficiently lower in sodium than regular salt (only 3% replacement).
The evidence for hypokalaemia was downgraded for indirectness since the participants included in the single study reporting on this outcome were not sufficiently generalisable, being younger hypertensive adults on potassium‐depleting diuretics.
Indirectness was also identified for all outcomes relating to cardiovascular events and mortality. This was due to most, or all, of the information for these outcomes coming from studies including participants at high risk of cardiovascular disease, or individuals who had already experienced a cardiovascular event; thereby limiting the generalisability of the findings to the general population.
Furthermore, the certainty in pooled estimates for changes in blood pressure was affected by unexplained substantial heterogeneity. This may be due to clinical heterogeneity related to the various ways in which blood pressure measurements are collected in practice; various studies have shown variability between single measures and the mean of consecutive measurements ( Burkard 2018 ), between blood pressure measured in a clinical setting and those obtained from daytime ambulatory measurements ( Banegas 2017 ), and between measurements obtained from aneroid (inflatable cuff) and electronic sphygmomanometers ( Shahbabu 2016 ).
Potential biases in the review process
The review may be affected by non‐reporting bias. For the unknowns that we are aware of ('known unknowns'), i.e. particular trial results not reported in a usable format, we contacted trial authors to request the data. In cases where we did not obtain these data and they were consequently excluded, this is a limitation, as we cannot be certain how their inclusion would have affected the pooled estimates. Despite this, we observe agreement between our pooled mean differences in blood pressure changes and those reported in other similar reviews on this topic, though the estimated reductions are typically more conservative in our review. As a result, we think it is unlikely that these missing data would have meaningfully changed our pooled estimates. A number of studies ( Li 2014 ; Li 2016 ; Mu 2009 ; Zhou 2013 ) reported allocating entire villages or households to LSSS or regular salt without explicitly excluding children, though only one study reported separate data for the effect of the intervention in children. It is possible that data from family members aged 18 years or younger who were included as part of a randomised village or household may have changed our pooled estimates for children, though we anticipate that these data would come from a small subset of participants included in these trials. We are unsure whether these missing data would have meaningfully changed our pooled estimates of effect in children.
It is more difficult for us to judge the effect of 'unknown unknowns', i.e. entire eligible studies not detected by our comprehensive search strategy. We acknowledge the possibility that small, unpublished studies may not have been identified and included in the review, as interpretation of funnel plot asymmetry could not definitively rule this out: for DBP, effect sizes were similar across trials despite varying standard errors, while for SBP both effect sizes and standard errors were similar between trials.
We did not exclude any studies based on the duration of intervention, the formulation of LSSS, or participant characteristics. We did, however, exclude studies with multifactorial designs where the effect of LSSS could not be isolated, though it may have been relevant to the review question. The reason for excluding studies with such multi‐component interventions was that any observed changes in outcomes of interest could not be attributed to LSSS alone.
Agreements and disagreements with other studies or reviews
Pooled mean differences in blood pressure in our review are in line with previous systematic reviews on the effects of LSSS use in adults. These recent systematic reviews reported reductions in DBP and SBP ranging from 2.00 to 4.04 mmHg and 7.81 to 8.87 mmHg, respectively ( Hernandez 2019 ; Jafarnejad 2020 ; Jin 2020 ).
However, it is important to take note of differences between these reviews and our review. One of the reviews included only studies conducted in Chinese study participants ( Jin 2020 ), while another restricted inclusion to participants with stage 2 hypertension ( Jafarnejad 2020 ). Both can be considered limited in their ability to inform guidelines for the general population: the review by Jin 2020 , through its restriction to an ethno‐geographic group with, according to a recent publication from the China Hypertension Survey, a high prevalence of hypertension ( Wang 2018 ); the review by Jafarnejad 2020 , for including only studies in participants with progressive disease ( Giles 2009 ). The inclusion criteria for these reviews consequently resulted in populations enriched for hypertension, which may have influenced treatment effect through an increased number of participants with resistant hypertension ( Yaxley 2015 ). Jafarnejad 2020 reported stratified analyses by several effect modifiers, demonstrating that LSSS use resulted in reductions in SBP and DBP in hypertensive adults of all ages, though reductions in SBP and DBP were slightly more pronounced in hypertensive adults younger than 65 years. Our own subgroup analyses produced numerically very similar results but did not fully reflect this pattern: in the general population, we found greater reductions in SBP but smaller reductions in DBP in younger participants compared to older participants. As discussed previously, these differences may be due to differences in the included populations. A large review investigating the effect of antihypertensive medication (therefore including participants with overt hypertension) reported larger reductions in DBP in younger participants when compared to older and very old participants ( Guang Wang 2005 ).
Only one review evaluated the certainty of evidence and presented low‐certainty evidence for reductions of 3.96 mmHg and 7.81 mmHg, in DBP and SBP respectively, at any length of follow‐up ( Hernandez 2019 ). Low‐certainty evidence was also presented for the effect on triglycerides. For other outcomes, Hernandez 2019 concluded, on the basis of moderate‐certainty evidence, that LSSS use probably has little or no effect on mortality, while its effects on detected hypertension, total blood cholesterol, glucose, as well as urinary sodium and potassium excretion were very uncertain. The pooled effect estimates of these outcomes were similar between the review by Hernandez 2019 and our review, despite slight differences in included studies and the exact definitions of outcomes.

Authors' conclusions

Abstract
Background
Elevated blood pressure, or hypertension, is the leading cause of preventable deaths globally. Diets high in sodium (predominantly sodium chloride) and low in potassium contribute to elevated blood pressure. The WHO recommends decreasing mean population sodium intake through effective and safe strategies to reduce hypertension and its associated disease burden. Incorporating low‐sodium salt substitutes (LSSS) into population strategies has increasingly been recognised as a possible sodium reduction strategy, particularly in populations where a substantial proportion of overall sodium intake comes from discretionary salt. LSSS contain lower concentrations of sodium because part of the sodium is displaced, predominantly by potassium or by other minerals. Potassium‐containing LSSS can therefore potentially decrease sodium intake and increase potassium intake simultaneously. Benefits of LSSS include their potential blood pressure‐lowering effect and relatively low cost. However, there are concerns about potential adverse effects of LSSS, such as hyperkalaemia, particularly in people at risk, for example, those with chronic kidney disease (CKD) or taking medications that impair potassium excretion.
Objectives
To assess the effects and safety of replacing salt with LSSS to reduce sodium intake on cardiovascular health in adults, pregnant women and children.
Search methods
We searched MEDLINE (PubMed), Embase (Ovid), Cochrane Central Register of Controlled Trials (CENTRAL), Web of Science Core Collection (Clarivate Analytics), Cumulative Index to Nursing and Allied Health Literature (CINAHL, EBSCOhost), ClinicalTrials.gov and WHO International Clinical Trials Registry Platform (ICTRP) up to 18 August 2021, and screened reference lists of included trials and relevant systematic reviews. No language or publication restrictions were applied.
Selection criteria
We included randomised controlled trials (RCTs) and prospective analytical cohort studies in participants of any age in the general population, from any setting in any country. This included participants with non‐communicable diseases and those taking medications that impair potassium excretion. Studies had to compare any type and method of implementation of LSSS with the use of regular salt, or no active intervention, at an individual, household or community level, for any duration.
Data collection and analysis
Two review authors independently screened titles, abstracts and full‐text articles to determine eligibility; and extracted data, assessed risk of bias (RoB) using the Cochrane RoB tool, and assessed the certainty of the evidence using GRADE. We stratified analyses by adults, children (≤ 18 years) and pregnant women. Primary effectiveness outcomes were change in diastolic and systolic blood pressure (DBP and SBP), hypertension and blood pressure control; cardiovascular events and cardiovascular mortality were additionally assessed as primary effectiveness outcomes in adults. Primary safety outcomes were change in blood potassium, hyperkalaemia and hypokalaemia.
Main results
We included 26 RCTs, 16 randomising individual participants and 10 randomising clusters (families, households or villages). A total of 34,961 adult participants and 92 children were randomised to either LSSS or regular salt, with the smallest trial including 10 and the largest including 20,995 participants. No studies in pregnant women were identified. Studies included only participants with hypertension (11/26), normal blood pressure (1/26), pre‐hypertension (1/26), or participants with and without hypertension (11/26). This was unknown in the remaining studies. The largest study included only participants with an elevated risk of stroke at baseline. Seven studies included adult participants possibly at risk of hyperkalaemia. All 26 trials specifically excluded participants in whom an increased potassium intake is known to be potentially harmful. The majority of trials were conducted in rural or suburban settings, with more than half (14/26) conducted in low‐ and middle‐income countries.
The proportion of sodium chloride replacement in the LSSS interventions varied from approximately 3% to 77%. The majority of trials (23/26) investigated LSSS where potassium‐containing salts were used to substitute sodium. In most trials, LSSS implementation was discretionary (22/26). Trial duration ranged from two months to nearly five years.
We assessed the overall risk of bias as high in six trials and unclear in 12 trials.
LSSS compared to regular salt in adults: LSSS compared to regular salt probably reduce DBP on average (mean difference (MD) ‐2.43 mmHg, 95% confidence interval (CI) ‐3.50 to ‐1.36; 20,830 participants, 19 RCTs, moderate‐certainty evidence) and SBP (MD ‐4.76 mmHg, 95% CI ‐6.01 to ‐3.50; 21,414 participants, 20 RCTs, moderate‐certainty evidence) slightly.
On average, LSSS probably reduce non‐fatal stroke (absolute effect (AE) 20 fewer/100,000 person‐years, 95% CI ‐40 to 2; 21,250 participants, 3 RCTs, moderate‐certainty evidence), non‐fatal acute coronary syndrome (AE 150 fewer/100,000 person‐years, 95% CI ‐250 to ‐30; 20,995 participants, 1 RCT, moderate‐certainty evidence) and cardiovascular mortality (AE 180 fewer/100,000 person‐years, 95% CI ‐310 to 0; 23,200 participants, 3 RCTs, moderate‐certainty evidence) slightly, and probably increase blood potassium slightly (MD 0.12 mmol/L, 95% CI 0.07 to 0.18; 784 participants, 6 RCTs, moderate‐certainty evidence), compared to regular salt.
LSSS may result in little to no difference, on average, in hypertension (AE 17 fewer/1000, 95% CI ‐58 to 17; 2566 participants, 1 RCT, low‐certainty evidence) and hyperkalaemia (AE 4 more/100,000, 95% CI ‐47 to 121; 22,849 participants, 5 RCTs, moderate‐certainty evidence) compared to regular salt. The evidence is very uncertain about the effects of LSSS on blood pressure control, various cardiovascular events, stroke mortality, hypokalaemia, and other adverse events (very‐low certainty evidence).
LSSS compared to regular salt in children: The evidence is very uncertain about the effects of LSSS on DBP and SBP in children. We found no evidence about the effects of LSSS on hypertension, blood pressure control, blood potassium, hyperkalaemia and hypokalaemia in children.
Authors' conclusions
When compared to regular salt, LSSS probably reduce blood pressure, non‐fatal cardiovascular events and cardiovascular mortality slightly in adults. However, LSSS also probably increase blood potassium slightly in adults. These small effects may be important when LSSS interventions are implemented at the population level. Evidence is limited for adults without elevated blood pressure, and there is a lack of evidence in pregnant women and people in whom an increased potassium intake is known to be potentially harmful, limiting conclusions on the safety of LSSS in the general population. We also cannot draw firm conclusions about effects of non‐discretionary LSSS implementations. The evidence is very uncertain about the effects of LSSS on blood pressure in children.
Plain language summary
Does using low‐sodium salt substitutes (LSSS) instead of regular salt reduce blood pressure and heart disease risks, and is it safe?
Key messages
• In adults, using LSSS instead of regular salt in food probably lowers blood pressure slightly. Adults using LSSS instead of regular salt probably have a slightly lower risk of non‐fatal heart conditions, such as stroke or a sudden reduced blood flow to the heart, and death from heart disease.
• Using LSSS instead of regular salt probably also slightly increases the level of blood potassium (a mineral that keeps your heart beating at the right pace) in adults. This could be harmful for people who cannot effectively regulate the potassium in their bodies. Other evidence on safety is very limited.
• We are not certain about effects of using LSSS instead of regular salt on blood pressure in children, or whether using LSSS is safe in children.
• This evidence may not directly apply to people known to be at risk of high blood potassium, such as people with kidney problems or on certain medications.
What are low‐sodium salt substitutes (LSSS)?
LSSS are products with less sodium than regular salt. Amounts of sodium in LSSS are lowered by replacing some of the sodium with potassium or other minerals. LSSS may help lower risks of using regular salt, since eating lots of sodium and not enough potassium contributes to high blood pressure. Globally, high blood pressure is the largest cause of preventable deaths, mainly because it causes stroke, acute coronary syndrome (ACS; where less blood flows to the heart), and kidney problems.
However, LSSS also have potential health risks. Using LSSS may lead to higher than normal blood potassium (hyperkalaemia), which causes problems with the speed and rhythm of the heartbeat, or can cause the heart to stop. These risks are greater in certain people, for example, those whose kidneys do not work properly to remove potassium.
What did we want to find out?
We wanted to find out what the effects of using LSSS instead of regular salt are on blood pressure as well as on events (stroke and ACS) and heart disease death. We also wanted to know if using LSSS instead of regular salt is safe, both in the general population and in people who are known to be at risk of high blood potassium levels.
We wanted to find this out for adults, children and pregnant women.
What did we do?
We searched five electronic databases and trial registries for studies that compared using LSSS with using regular salt. We compared and summarised the results of the studies and rated our confidence in the combined evidence, based on factors such as study methods and sizes.
What did we find?
We found 26 trials* involving 34,961 adults and 92 children. No studies in pregnant women were found. Most trials were undertaken in rural or suburban areas, with more than half done in low‐ and middle‐income countries. Most trials included some people with high blood pressure (22); the largest included only people with a high risk of stroke. Seven trials were done in people at possible risk of high blood potassium. All trials excluded people where high potassium intake is known to be harmful, such as people with kidney problems or on certain medications. Nearly all trials (23) examined LSSS types where some sodium was replaced with potassium. The amount of sodium replaced in the various LSSS used in the trials ranged from very small (3%) to large (77%).
*Trials are types of studies in which participants are assigned randomly to two or more treatment groups. This is the best way to ensure similar groups of participants.
Main results
In adults, LSSS probably lowers blood pressure (diastolic and systolic) slightly when compared to regular salt. Using LSSS also probably lowers risk of non‐fatal stroke, non‐fatal ACS and heart disease death slightly when compared to regular salt.
However, using LSSS instead of regular salt probably also slightly increases the level of potassium in the blood.
Compared to regular salt, LSSS may result in little to no difference in high blood pressure and hyperkalaemia.
We could not draw any conclusions about effects of LSSS on blood pressure control, various heart disease events, death caused by stroke, lower than normal blood potassium (hypokalaemia), and other adverse events.
We could not draw any conclusions about the effects or safety of using LSSS instead of regular salt in children.
What are the limitations of the evidence?
We are moderately confident in the evidence. Our confidence was lowered mainly because of concerns about how some trials were conducted, and whether the results apply to the general population. We are not sure about the effects and safety of LSSS in children, pregnant women, people known to have a risk of high blood potassium, or those who do not have high blood pressure. We are also unsure about the effects of LSSS when used in foods not prepared at home. Further research may change these results.
How up to date is this evidence?
The evidence is up to date to 18 August 2021.
Summary of findings

Objectives

To assess the effects and safety of replacing salt with LSSS to reduce sodium intake on cardiovascular health in adults, pregnant women and children.

Acknowledgements
The World Health Organization (WHO) provided funding to Stellenbosch University towards the cost of carrying out this systematic review. AB, MV, AS and CN are partly supported by the Research, Evidence and Development Initiative (READ‐It). READ‐It (project number 300342‐104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government’s official policies.
This Cochrane Review is associated with the Research, Evidence and Development Initiative (READ‐It). READ‐It (project number 300342‐104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government's official policies.
We thank the following people:
Vittoria Lutje, Information Specialist for the Cochrane Infectious Diseases Group, for the Embase searches.
Robin Featherstone, Information Specialist for Cochrane, for guidance on the search strategy.
Ms Nina Robertson, for assistance with screening and data extraction.
Ms Yuan Chi, Information Specialist for the Cochrane Campbell Global Ageing Partnership, for Chinese translation and for screening and data extraction of papers in Chinese.
Dr Marty Chaplin, Senior Research Associate, Liverpool School of Tropical Medicine, UK, and Statistical Editor, Cochrane Infectious Diseases, for guidance related to the stepped‐wedge cluster‐randomised controlled trial.
Prof Alfred Musekiwa, Associate Professor in Biostatistics, University of Pretoria, South Africa, for guidance on the meta‐analytic approach for rate ratios and the conversion of incidence rates to rate ratios.
Prof Razeen Davids, Head of the Division of Nephrology, Department of Medicine, Faculty of Medicine and Health Sciences, Stellenbosch University and Tygerberg Hospital, South Africa, for guidance on criteria for the assessment of hyperkalaemia risk and the time point ranges relevant to the review outcomes.
Members of the WHO Nutrition Guidance Expert Advisory Group (NUGAG) Subgroup on Diet and Health, for input on this review in line with the WHO guideline development process.
Cochrane Public Health supported the authors in the development of this review. The following people conducted the editorial process for this review:
Sign‐off Editor (final editorial decision): Julia L Finkelstein, Associate Professor, Division of Nutritional Sciences, and Deputy Director, Affiliate Cochrane Center for Nutrition, Cornell University
Managing Editors (selected peer reviewers, collated peer‐reviewer comments, provided editorial guidance to authors, edited the article): Colleen Ovelman and Joey Kwong, Cochrane Central Editorial Service
Editorial Assistant (conducted editorial policy checks and supported editorial team): Lisa Wydrzynski, Cochrane Central Editorial Service
Copy Editor (copy‐editing and production): Anne Lethaby, c/o Cochrane Production Service
Peer reviewers (provided comments and recommended an editorial decision): Rachael McLean, Department of Preventive and Social Medicine, Dunedin School of Medicine, University of Otago, New Zealand (clinical review); Jessica Rigutto‐Farebrother, Human Nutrition Laboratory, Institute of Food, Nutrition and Health, ETH Zürich, Switzerland (clinical review); Robert Walton, Cochrane UK (summary versions review); Rachel Richardson, Cochrane Evidence Production & Methods Directorate (methods review); Jo Platt, Cochrane GNOC (search review). One additional peer reviewer provided clinical peer review but chose not to be publicly acknowledged.
Appendices
Appendix 1. Search strategies
MEDLINE (PubMed)
Searched: 1946 to 18 August 2021
#1 ((((((((randomized controlled trial [pt]) OR controlled clinical trial [pt]) OR randomized [tiab]) OR placebo [tiab]) OR clinical trials as topic [mesh: noexp]) OR randomly [tiab]) OR trial [ti])) NOT ((animals [mh] NOT humans [mh]))
#2 "cohort study"[Title/Abstract] OR epidemiologic*[Title/Abstract] OR longitudinal[Title/Abstract] OR "Follow‐up study"[Title/Abstract] OR "Follow up study"[Title/Abstract] OR prospective[Title/Abstract] OR "Observational study"[Title/Abstract] NOT ((animals [mh] NOT humans [mh]))
#3 "adverse effects"[MeSH Subheading] OR "complications"[MeSH Subheading] OR "deficiency"[MeSH Subheading] OR "safe"[Title/Abstract] OR "safety"[Title/Abstract] OR "side effect"[Title/Abstract] OR "side effects"[Title/Abstract] OR "undesirable effect"[Title/Abstract] OR "undesirable effects"[Title/Abstract] OR "treatment emergent"[Title/Abstract] OR "tolerability"[Title/Abstract] OR "toxicity"[Title/Abstract] OR "ADRS"[Title/Abstract] OR ("adverse"[Title/Abstract] AND ("effect"[Title/Abstract] OR "effects"[Title/Abstract] OR "reaction"[Title/Abstract] OR "reactions"[Title/Abstract] OR "event"[Title/Abstract] OR "events"[Title/Abstract] OR "outcome"[Title/Abstract] OR "outcomes"[Title/Abstract]))
#4 #1 OR #2 OR #3
#5 "salt substitute"[Title/Abstract] OR "salt substitutes"[Title/Abstract] OR "salt substitution"[Title/Abstract] OR "sodium substitute"[Title/Abstract] OR "sodium substitutes"[Title/Abstract] OR "sodium substitution"[Title/Abstract] OR "substituting sodium"[Title/Abstract] OR "sodium chloride substitute"[Title/Abstract] OR "sodium chloride substitutes"[Title/Abstract] OR "sodium chloride substitution"[Title/Abstract] OR "salt alternative"[Title/Abstract] OR "salt alternatives"[Title/Abstract] OR "sodium alternative"[Title/Abstract] OR "low sodium salt"[Title/Abstract] OR "mineral salt"[Title/Abstract] OR "KCl salt"[Title/Abstract] OR "potassium chloride salt"[Title/Abstract] OR "potassium enriched salt"[Title/Abstract] OR "Potassium lactate"[Title/Abstract] OR "magnesium‐enriched salt"[Title/Abstract] OR "magnesium‐enriched salt"[Title/Abstract] OR "sodium replacement"[Title/Abstract] OR "salt replacement"[Title/Abstract] OR "salt replacer"[Title/Abstract] OR "salt replacers"[Title/Abstract] OR "sodium chloride replacement"[Title/Abstract] OR "sodium chloride replacer"[Title/Abstract]
#6 "Diet, Sodium‐Restricted"[Mesh]
#7 #5 OR #6
#8 #4 AND #7
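The numbered blocks above combine with OR and AND exactly as in lines #4, #7 and #8. As an illustration only (not part of the published strategy, and with the term lists heavily abbreviated), the final query string could be assembled programmatically like this:

```python
# Sketch of how the PubMed strategy combines its numbered blocks.
# The term lists are abbreviated here; the full lists appear in Appendix 1.

rct_filter = '(randomized controlled trial [pt]) OR randomized [tiab] OR trial [ti]'
observational = '"cohort study"[Title/Abstract] OR prospective[Title/Abstract]'
adverse = '"safe"[Title/Abstract] OR "safety"[Title/Abstract]'

substitute_terms = ' OR '.join(
    f'"{t}"[Title/Abstract]'
    for t in ('salt substitute', 'salt substitutes', 'low sodium salt')
)
mesh_term = '"Diet, Sodium-Restricted"[Mesh]'

line4 = f'({rct_filter}) OR ({observational}) OR ({adverse})'  # 1 OR 2 OR 3
line7 = f'({substitute_terms}) OR ({mesh_term})'               # 5 OR 6
final_query = f'({line4}) AND ({line7})'                       # 4 AND 7
```

The resulting string could then be submitted via NCBI's E-utilities, for example with Biopython's `Bio.Entrez.esearch(db="pubmed", term=final_query)`.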
Embase (Ovid)
Searched: 1947 to 18 August 2021
1 ((salt or sodium) adj1 (substitut* or alternative or replace*)).tw.
2 sodium chloride substitut*.tw.
3 sodium chloride alternative.tw.
4 low* sodium salt.tw.
5 ("mineral salt" or "KCl salt").tw.
6 ("potassium chloride salt" or "potassium enriched salt" or "Potassium lactate").tw.
7 (substitut* or alternative).tw.
8 *sodium chloride/
9 7 and 8
10 *magnesium salt/
11 (magnesium chloride salt or magnesium enriched salt).tw.
12 reduced sodium salt.tw.
13 *sodium restriction/
14 1 or 2 or 3 or 4 or 5 or 6 or 9 or 10 or 11 or 12 or 13
15 (random* or factorial* or crossover* or cross over or placebo* or (doubl* adj blind*) or (singl* adj blind*) or assign* or allocat* or volunteer*).tw.
16 crossover procedure/ or double blind procedure/
17 randomized controlled trial/ or single blind procedure/
18 (rat or rats or mouse or mice or swine or porcine or murine or sheep or lambs or pigs or piglets or rabbit or rabbits or cat or cats or dog or dogs or cattle or bovine or monkey or monkeys or trout or marmoset*).ti. and animal experiment/
19 Animal experiment/ not (human experiment/ or human/)
20 18 or 19
21 15 or 16 or 17
22 21 not 20
23 14 and 22
24 (ae or to).fs.
25 (safe or safety or side effect* or undesirable effect* or treatment emergent or tolerability or toxicity or adrs or (adverse adj2 (effect or effects or reaction or reactions or event or events or outcome or outcomes))).ti,ab.
26 24 or 25
27 26 not 20
28 14 and 27
29 23 or 28
30 limit 29 to (conference abstract or conference paper or "conference review")
31 29 not 30
32 limit 30 to yr="2019 ‐Current"
33 31 or 32
CENTRAL, Cochrane Library
Searched: Issue 8 of 12, 2021
#1 MeSH descriptor: [Diet, Sodium‐Restricted] explode all trees
#2 ("salt substitute" OR "salt substitutes" OR "salt substitution" OR "sodium substitute" OR "sodium substitutes" OR "sodium substitution" OR "substituting sodium" OR "sodium chloride substitute" OR "sodium chloride substitutes" OR "sodium chloride substitution" OR "salt alternative" OR "salt alternatives" OR "sodium alternative" OR "low sodium salt" OR "mineral salt" OR "KCl salt" OR "potassium chloride salt" OR "potassium enriched salt" OR "Potassium lactate" OR "magnesium‐enriched salt" OR "magnesium enriched salt" OR "sodium replacement" OR "salt replacement" OR "salt replacer" OR "salt replacers" OR "sodium chloride replacement" OR "sodium chloride replacer"):ti,ab,kw
#3 #1 OR #2 in Trials
Web of Science Core Collection with Indexes SCI‐Expanded, SSCI, CPCI‐S (Clarivate Analytics)
Searched: 1970 to 18 August 2021
#1 TI=("salt substitute" OR "salt substitutes" OR "salt substitution" OR "sodium substitute" OR "sodium substitutes" OR "sodium substitution" OR "substituting sodium" OR "sodium chloride substitute" OR "sodium chloride substitutes" OR "sodium chloride substitution" OR "salt alternative" OR "salt alternatives" OR "sodium alternative" OR "low sodium salt" OR "mineral salt" OR "KCl salt" OR "potassium chloride salt" OR "potassium enriched salt" OR "Potassium lactate" OR "magnesium‐enriched salt" OR "magnesium‐enriched salt" OR "sodium replacement" OR "salt replacement" OR "salt replacer" OR "salt replacers" OR "sodium chloride replacement" OR "sodium chloride replacer")
#2 AB=("salt substitute" OR "salt substitutes" OR "salt substitution" OR "sodium substitute" OR "sodium substitutes" OR "sodium substitution" OR "substituting sodium" OR "sodium chloride substitute" OR "sodium chloride substitutes" OR "sodium chloride substitution" OR "salt alternative" OR "salt alternatives" OR "sodium alternative" OR "low sodium salt" OR "mineral salt" OR "KCl salt" OR "potassium chloride salt" OR "potassium enriched salt" OR "Potassium lactate" OR "magnesium‐enriched salt" OR "magnesium‐enriched salt" OR "sodium replacement" OR "salt replacement" OR "salt replacer" OR "salt replacers" OR "sodium chloride replacement" OR "sodium chloride replacer")
#3 #1 OR #2
Cumulative Index to Nursing and Allied Health Literature (CINAHL) (EBSCOhost)
Searched: 1937 to 18 August 2021
S1 MW diet, sodium restricted
S2 TI "salt substitute" OR "salt substitutes" OR "salt substitution" OR "sodium substitute" OR "sodium substitutes" OR "sodium substitution" OR "substituting sodium" OR "sodium chloride substitute" OR "sodium chloride substitutes" OR "sodium chloride substitution" OR "salt alternative" OR "salt alternatives" OR "sodium alternative" OR "low sodium salt" OR "mineral salt" OR "KCl salt" OR "potassium chloride salt" OR "potassium enriched salt" OR "Potassium lactate" OR "magnesium‐enriched salt" OR "magnesium‐enriched salt" OR "sodium replacement" OR "salt replacement" OR "salt replacer" OR "salt replacers" OR "sodium chloride replacement" OR "sodium chloride replacer"
S3 AB "salt substitute" OR "salt substitutes" OR "salt substitution" OR "sodium substitute" OR "sodium substitutes" OR "sodium substitution" OR "substituting sodium" OR "sodium chloride substitute" OR "sodium chloride substitutes" OR "sodium chloride substitution" OR "salt alternative" OR "salt alternatives" OR "sodium alternative" OR "low sodium salt" OR "mineral salt" OR "KCl salt" OR "potassium chloride salt" OR "potassium enriched salt" OR "Potassium lactate" OR "magnesium‐enriched salt" OR "magnesium‐enriched salt" OR "sodium replacement" OR "salt replacement" OR "salt replacer" OR "salt replacers" OR "sodium chloride replacement" OR "sodium chloride replacer"
S4 S2 OR S3
S5 S1 OR S4
ClinicalTrials.gov (https://clinicaltrials.gov/)
Searched: 18 August 2021
Studies: All studies
Condition or disease: replacement OR replace OR substitute OR substituting OR substitution OR alternative OR reduce OR reduced OR reduction OR lower OR low
Other terms: salt OR sodium
WHO International Clinical Trials Registry Platform (ICTRP) (https://trialsearch.who.int/)
Searched: 18 August 2021
salt OR sodium in the title
replacement OR replace OR substitute OR substituting OR substitution OR alternative OR reduce OR reduced OR reduction OR lower OR low in the Intervention
Recruitment status is ALL
Appendix 2. Simplified modelling approach to estimate absolute numbers and population impact
The effects of interventions are best understood as absolute numbers rather than relative numbers. Rating the certainty of evidence using GRADE in relation to thresholds (other than no effect) with a minimally contextualised approach requires the use of absolute numbers ( Zeng 2021 ). We therefore needed to estimate the absolute numbers of events prevented or caused for effectiveness outcomes expressed in relative terms. In addition, since changes in blood pressure are surrogate outcomes for cardiovascular health, we also needed to estimate the absolute numbers of events prevented or caused by changes in key surrogate outcomes. We used a simplified model for this, making several simplifying assumptions, as detailed below.
Estimating absolute numbers and population impact
To be able to calculate the risk difference, number needed to treat for an additional beneficial outcome (NNTB) or number needed to treat for an additional harmful outcome (NNTH) and corresponding number of events prevented or caused, we needed to know the baseline risk of the disease or event that was being measured or estimated. This baseline risk is central to the impact of population‐level interventions, since small changes in a population with a large risk might have considerable impact. As we were investigating the intervention on a global scale, we used baseline risks from the WHO Global Health Estimates 2019 ( WHO 2020 ) and the Global Burden of Disease Study 2016 global incidence ( Institute for Health Metrics and Evaluation 2016 ) to inform our model. While this approach provided information for the global context, this approach did introduce our first simplifying assumption , i.e. that events of interest are homogeneously distributed across the world.
Due to the availability of global baseline risk information and the cardiovascular health focus of this review, the key outcomes to which we applied this approach were: change in diastolic blood pressure (DBP); change in systolic blood pressure (SBP); cardiovascular events: non‐fatal stroke; cardiovascular events: non‐fatal acute coronary syndrome; cardiovascular mortality and stroke mortality. This was only done for adults, since only blood pressure outcomes were reported in children; and age‐specific hazard ratios for blood pressure reductions were not sought for this age group.
1. Approach to estimating population impacts of changes in blood pressure
In order to estimate absolute numbers of events prevented or caused by changes in blood pressure in adults, we first needed to convert changes in blood pressure into corresponding relative risks. This was done by following the approach as described in Verbeek 2021 , using age‐specific hazard ratios for stroke and ischaemic heart disease (IHD) in relation to blood pressure reductions ( Lewington 2002 ). Though Lewington 2002 reported hazard ratios for the age categories 40‐49, 50‐59, 60‐69, 70‐79 and 80‐89, we used an average of only the last four categories (i.e. an average of 50‐89); this simple averaging of hazard ratios across age categories was our second simplifying assumption. Averaging hazard ratios for categories starting at 50 years instead of 40 years was necessary as the WHO Global Health Estimates 2019 did not report mortality baseline risk for the 40‐49 years category alone. However, a crude analysis suggested that the contribution of stroke mortality in the 30‐49 cohort was less than 1% of the total events reported for people aged 30‐70+. A third simplifying assumption was the use of stroke hazard ratios rather than IHD hazard ratios from Lewington 2002 , as stroke events were a prespecified outcome of this review.
Once corresponding relative risks were calculated, we used the WHO global estimates of stroke mortality, averaged for the 50‐59, 60‐69 and 70+ age categories, as baseline risk to calculate NNTB/NNTH and corresponding estimates of number of stroke deaths prevented or caused by the change in blood pressure. Therefore, our fourth simplifying assumption was that baseline risk of stroke mortality is homogeneously distributed across these age categories.
An example of this approach for the values obtained for change in diastolic blood pressure (DBP) in Comparison 1 ( Analysis 1.1 ) (mean difference (MD) ‐2.43 mmHg, 95% confidence interval (CI) ‐3.50 to ‐1.36) can be seen in Table 12 .
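The conversion chain just described (blood-pressure difference → relative risk via a hazard ratio → risk difference, NNTB and events prevented) can be sketched in a few lines. This is an illustrative reconstruction only: the hazard ratio (0.5 per 10 mmHg) and baseline risk (0.005) below are hypothetical placeholders, not the averaged Lewington 2002 or WHO figures used for Table 12.

```python
def bp_change_to_rr(md_mmhg, hr_per_10mmhg):
    """Scale a per-10-mmHg hazard ratio to an observed mean blood-pressure
    difference (negative md_mmhg = reduction), assuming a log-linear
    association between blood pressure and risk."""
    return hr_per_10mmhg ** (-md_mmhg / 10.0)

def events_prevented(baseline_risk, rr, population):
    """Risk difference, NNTB and absolute events prevented in `population`
    for a given baseline risk and relative risk."""
    rd = baseline_risk * (1.0 - rr)
    nntb = 1.0 / rd
    return rd, nntb, rd * population

# Illustrative placeholder values only (not the review's actual inputs):
rr = bp_change_to_rr(-2.43, 0.5)               # MD from Analysis 1.1
rd, nntb, n = events_prevented(0.005, rr, 1_000_000)
```

The same three-step structure (scale the hazard ratio, multiply by baseline risk, invert for NNTB) underlies the worked example in Table 12.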
2. Approach to estimating population impacts of changes in cardiovascular events
We used a similar approach to estimating absolute numbers for relative effects, though we did not need to convert these measures using hazard ratios as for blood pressure. Our source data for baseline risk of non‐fatal outcomes were composite.
We used global incidence from the Global Burden of Disease Study 2016 ( Institute for Health Metrics and Evaluation 2016 ), providing the total number of global stroke and acute coronary syndrome (called ischaemic heart disease in this study) events in 2016. To calculate the number of non‐fatal events, we subtracted the number of corresponding cause‐specific mortality outcomes from the WHO Global Health Estimates 2015 ( WHO 2020 ), and divided by the total population at risk in 2015 (e.g. [total number of strokes in 2016 (GBD 2016) – total number of stroke mortality outcomes in 2015 (WHO 2015)]/[total population in 2015 (WHO 2015)]). The 2015 WHO estimates were used as they map closest to GBD 2016 estimates, representing our fifth simplifying assumption : i.e., that these two datasets are sufficiently comparable to use in this way. In addition, our sixth simplifying assumption was that the numbers estimated for 2015/2016 are still applicable five years later.
An example of this approach for the values obtained for non‐fatal stroke in Comparison 1 ( Analysis 1.33 ) (risk ratio (RR) 0.90, 95% CI 0.80 to 1.01) can be seen in Table 13 .
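The composite baseline-risk formula above ([total events − deaths]/population) and its application to a risk ratio and its confidence bounds can be sketched as follows. The event counts here are hypothetical placeholders, not the actual GBD 2016/WHO 2015 figures.

```python
def nonfatal_baseline_risk(total_events, deaths, population):
    """Composite baseline risk: (all events - fatal events) / population,
    mirroring the GBD/WHO calculation described above."""
    return (total_events - deaths) / population

def absolute_effect(baseline_risk, rr, rr_low, rr_high, population):
    """Events prevented (negative = caused) per `population`, with the
    same calculation applied to the confidence-interval bounds."""
    return tuple(baseline_risk * (1.0 - r) * population
                 for r in (rr, rr_low, rr_high))

# Hypothetical placeholder inputs (not the actual GBD/WHO figures):
# 13.7 million strokes, 6.3 million stroke deaths, 7.35 billion people.
risk = nonfatal_baseline_risk(13_700_000, 6_300_000, 7_350_000_000)
point, best, worst = absolute_effect(risk, 0.90, 0.80, 1.01, 100_000)
```

Because the upper CI bound crosses 1, the worst-case estimate changes sign, i.e. events caused rather than prevented, which is why the review reports both directions.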
3. Approach to estimating population impacts of changes in cardiovascular mortality
We used a similar approach to estimating absolute numbers for mortality outcomes to the approach detailed for non‐fatal events. The difference was the use of WHO Global Health Estimates 2019 only, as this dataset provided us with cause‐specific mortality data as well as population numbers. Therefore, we simply divided the number of events for a cause‐specific mortality outcome by the total population to estimate a baseline risk.
For example, the dataset reported 17,863,827 cardiovascular mortality outcomes in the world in 2019 and a total global population of 7.708 billion, corresponding to a baseline risk of 17,863,827/7,708,260,547 ≈ 0.00232 (about 2.32 per 1000 population) in 2019.
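The arithmetic in this example can be checked directly from the figures quoted above; expressed per 1000 population, the 2019 baseline risk is roughly 2.32 per 1000:

```python
# Cause-specific mortality baseline risk from the WHO 2019 figures quoted above.
cv_deaths_2019 = 17_863_827
global_population_2019 = 7_708_260_547

baseline_risk = cv_deaths_2019 / global_population_2019  # per person per year
per_1000 = baseline_risk * 1000                          # ~2.32 per 1000
```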
Data and analyses
Characteristics of studies
Characteristics of included studies [ordered by study ID]
Characteristics of excluded studies [ordered by study ID]
Characteristics of studies awaiting classification [ordered by study ID]
Characteristics of ongoing studies [ordered by study ID]
Differences between protocol and review
Populations at risk of hyperkalaemia: The PICO was initially conceptualised into six comparisons by the WHO NUGAG Subgroup on Diet and Health, such that effects of LSSS in populations at possible risk of hyperkalaemia would be explored by stratifying the data into three comparisons in each of the subpopulations, namely adults, children, pregnant women possibly at risk of hyperkalaemia. During the guideline development process, the WHO NUGAG requested that effects of LSSS in populations at possible risk of hyperkalaemia rather be explored by subgroup analysis instead of by this stratification, and only for the safety outcomes (change in blood potassium, hyperkalaemia, hypokalaemia, adverse events, renal function and hyponatraemia).
The WHO NUGAG decided to exclude the effectiveness outcomes (change in DBP, change in SBP, hypertension, blood pressure control, cardiovascular events, cardiovascular mortality, all‐cause mortality, antihypertensive medication use, change in fasting blood glucose, change in blood triglycerides, change in total blood cholesterol, change in 24‐h urinary sodium and potassium excretion, as well as diagnosis of diabetes mellitus and change in BMI (for adults) and growth changes, bone densitometry and bone health (for children)) from this subgrouping, since there is no clinical justification to expect differences in effectiveness outcomes in the ‘at risk’ populations; their ‘at risk’ status relates specifically to the safety outcomes, which are primarily linked to potassium metabolism. In most low‐sodium salt substitutes, the sodium is partially replaced by potassium, resulting in an increase in potassium intake with their use.
Change to original title: We removed 'renal health' from the original title to ensure the title more accurately reflected the focus of the primary cardiovascular outcomes of the review, since renal outcomes were only included as secondary outcomes.
Screening: The protocol stated that 'an initial electronic title screen using keywords to remove records that are obviously irrelevant' would be conducted. We replaced this with a full duplicate and independent screening process of all records yielded by the searches.
Measures of treatment effect: The search update in August 2022 resulted in the inclusion of a trial reporting event rates. In addition, a stepped‐wedge trial reporting hazard ratios for incident hypertension was included in the review. The analytical approaches to meta‐analysis of these data were added to the Methods section.
Additional outcomes: Two additional outcomes (change in 24‐hour urinary sodium and potassium excretion) were added by WHO NUGAG following the guideline development process, and were consequently incorporated into the review. A third outcome in children (all‐cause mortality) was erroneously omitted from the protocol and was included in the review.
Subgroups: During the guideline development process, it was decided that the subgroups total potassium intake and total sodium intake were not important to include, and were therefore excluded. The subgroup listed as duration of intervention in the protocol is included in the review as duration of study.
Sensitivity analyses: The protocol stated that 'We will also consider other potential sources of heterogeneity, such as methodological sources (using sensitivity analysis)', but did not specify the exact sensitivity analyses that were planned. Sensitivity analyses investigating the effect on primary outcomes of excluding trials at high risk of bias, and of excluding trials with clusters as the unit of allocation, were specified in the Methods and conducted in the review.
Contributions of authors
The protocol for the review was drafted by CN, AB, MV and AS, in line with the PICO question developed by the World Health Organization (WHO) Nutrition Guidance Expert Advisory Group (NUGAG) Subgroup on Diet and Health. The protocol was approved by WHO, and the review was prospectively registered on the international prospective register of systematic reviews (PROSPERO 2020 CRD42020180162).
AB and CN drafted the review and MV and AS provided inputs to finalise the review. All authors approved the final manuscript.
Sources of support
Internal sources
No sources of support provided
External sources
World Health Organization, Other
The World Health Organization (WHO) provided funding to Stellenbosch University towards the cost of carrying out this systematic review.
Foreign, Commonwealth and Development Office, UK
Project number 300342‐104
Research, Evidence and Development Initiative (READ‐It), UK
READ‐It (project number 300342‐104) is funded by UK aid from the UK government.
Declarations of interest
AB: partly supported by the Research, Evidence and Development Initiative (READ‐It). READ‐It (project number 300342‐104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government's official policies; partial support paid to my institution for a scoping review on total fat intake and health outcomes other than measures of unhealthy weight gain (2020); a systematic review on low sodium salt substitutes and cardiovascular health (2020‐2021); rapid scoping reviews on coconut and palm oil intake and cardiovascular health (2021); a scoping review on the health effects of tropical oil consumption (2022).
MV: partly supported by the Research, Evidence and Development Initiative (READ‐It). READ‐It (project number 300342‐104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government's official policies; partial support paid to my institution for a scoping review on total fat intake and health outcomes other than measures of unhealthy weight gain (2020); a systematic review on low sodium salt substitutes and cardiovascular health (2020‐2021); rapid scoping reviews on coconut and palm oil intake and cardiovascular health (2021); a scoping review on health effects of tropical oil consumption (2022).
AS: partly supported by the Research, Evidence and Development Initiative (READ‐It). READ‐It (project number 300342‐104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government's official policies.
CN: partly supported by the Research, Evidence and Development Initiative (READ‐It). READ‐It (project number 300342‐104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government's official policies; partial support paid to my institution for a scoping review on total fat intake and health outcomes other than measures of unhealthy weight gain; a systematic review on low sodium salt substitutes and cardiovascular health; rapid scoping reviews on coconut and palm oil intake and cardiovascular health; a scoping review on the health effects of tropical oil consumption.
*CN is Co‐director of Cochrane Nutrition, and AB and MV are members of the Cochrane Nutrition local coordination team. These authors had no involvement in the editorial process for this review.
Citation: Cochrane Database Syst Rev. 2022 Aug 10; 2022(8):CD015207. License: CC BY.
Introduction
Archaea are a prokaryotic domain of life that structurally resemble, but are evolutionarily distinct from, bacteria. Apart from the sharp separation in the phylogenetic trees of universal genes (mostly encoding translation system components), archaea differ from bacteria in many major features, including partly unrelated DNA replication and transcription machineries, different structures of membrane lipids and cell walls (with the corresponding, distinct enzymatic machineries involved in membrane and cell wall biogenesis), several unique coenzymes, and unique RNA modifications 1 . A remarkable diversity of archaeal cell morphology matching that of bacterial cells has been discovered, including lobe-shaped Sulfolobus acidocaldarius 2 , filamentous Methanospirillum hungatei 3 and Thermofilum pendens 4 , rod-shaped Thermoproteus tenax 5 and Pyrobaculum aerophilum 6 , and even square-shaped Haloquadratum walsbyi 7 . The distinct shapes of these archaeal cells are maintained by the cell wall and cytoskeleton. However, despite expanding research into archaeal biology, including some limited cellular differentiation such as cyst formation in Methanosarcina 8 , 9 , so far there have been no reports of complex cellular differentiation accompanied by major morphological changes in archaea. This apparent lack of complex cellular differentiation in archaea contrasts with the diverse forms of cellular differentiation in bacteria, which include the formation of spores in bacilli and clostridia, heterocysts and akinetes in cyanobacteria, and, particularly, complex differentiated colonies in myxobacteria and actinomycetes 10 .
In this work, we describe a group of haloarchaea, isolated from salt marsh sediment, that display cellular differentiation into hyphae and spores. The morphogenesis of these haloarchaea visually resembles that of Streptomyces bacteria. Comparative genomics suggests potential gains and losses of genes that might be relevant to this type of archaeal morphogenesis. A gene encoding a Cdc48-family ATPase and a gene cluster encoding a putative oligopeptide transporter might be involved in cellular differentiation, as suggested by multi-omic analyses of non-differentiating mutants. Remarkably, this haloarchaeal gene cluster can restore hyphae formation in a Streptomyces coelicolor mutant that carries a deletion in a homologous gene cluster ( bldKA - bldKE ). These results provide new knowledge on the morphology of archaea and enrich our understanding of the biological diversity and environmental adaptation of archaea.
Isolation, identification, morphological observation
The sediment samples were collected from Qijiaojing Salt Lake in the Xinjiang Uygur Autonomous Region, China. Isolation was performed using the standard dilution-plating technique on Modified Gause (MG) medium 11 containing (g/L): soluble starch, 5; lotus root starch, 5; KNO 3 , 1; MgSO 4 ·7H 2 O, 0.5; K 2 HPO 4 , 0.5; NaCl, 200; agar, 20; 1 mL trace solution (2% FeSO 4 ·7H 2 O; 1% MnCl 2 ·4H 2 O; 1% ZnSO 4 ·7H 2 O; 1% CuSO 4 ·5H 2 O; pH adjusted to 7.2). Plates were incubated at 37 °C for at least 6 weeks. All isolates with actinobacteria-like filamentous colonies were collected. During identification of these isolates based on the 16S rRNA gene, one isolate, designated YIM 93972, received particular attention because its 16S rRNA gene could not be amplified using universal primers for the bacterial 16S rRNA gene, whereas primers for the archaeal 16S rRNA gene worked. Cell morphology of strain YIM 93972 was further examined using scanning electron microscopy (Quanta 2000, FEI, Hillsboro, OR, USA) at different concentrations of NaCl. Cells from a 20-day-old culture of strain YIM 93972 were fixed for 30 min in OsO 4 vapour on ice and filtered onto 2 μm isopore membrane filters (Millipore, Tokyo, Japan) using a modified version of the method of Schubert et al. 43 . The fixed cultures were dehydrated in a series of 50–100% ethanol mixtures, critical-point-dried in CO 2 and sputter-coated with 10 nm of gold-palladium. Cells were imaged using an XL30 ESEM-TMP (Philips-FEI, Eindhoven, Holland) at 3 kV. To test the temperature tolerance of spores, spores were separated from substrate hyphae using a nitrocellulose transfer membrane (see below for details). Suspensions (OD 2) of spores and of mechanically broken substrate hyphae were incubated at 60, 70, 80 and 90 °C for 15 min (control, 37 °C). Then, 100 μL of suspension was spread on solid ISP 4 medium with 20% NaCl and cultured for 30 days. Images were collected every 10 days.
To obtain more strains like YIM 93972, the same isolation procedure was performed on soil samples collected from the Aiding, Large South, Dabancheng East and Uzun Brac Salt Lakes in the Xinjiang Uygur Autonomous Region. Five haloarchaeal strains were isolated and confirmed to undergo morphological differentiation (cultured on MG plates containing 20% NaCl at 37 °C for 3–4 weeks). Among them, YIM A00010, YIM A00011 and YIM A00014 were isolated from the Aiding Salt Lake samples, and YIM A00012 and YIM A00013 were isolated from the Uzun Brac Salt Lake samples. To further characterise the diversity of YIM 93972 and its relatives in these four salt lakes, 16S rRNA genes were amplified directly from the total DNA of 15 soil samples using a pair of YIM 93972-specific primers (forward: 5′-GGGCGTCCAGCGGAAACC-3′; reverse: 5′-CCATCAGCCTGACTGTCAT-3′). The PCR products were isolated from a 2% agarose gel, purified using the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) according to the manufacturer's instructions and quantified using a Quantus Fluorometer (Promega, Madison, WI, USA). Purified amplicons were pooled in equimolar amounts and paired-end sequenced on an Illumina MiSeq PE300 platform (Illumina, San Diego, CA, USA) according to standard protocols by Majorbio Bio-Pharm Technology Co. Ltd. (Shanghai, China) 44 .
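As a rough in silico illustration of how such strain-specific primers delimit an amplicon, the sketch below scans a sense-strand template for the forward primer and the reverse complement of the reverse primer. The primer sequences are those quoted above; the template is synthetic, not the actual YIM 93972 16S rRNA gene.

```python
COMPLEMENT = str.maketrans('ACGT', 'TGCA')

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def predicted_amplicon(template, fwd, rev):
    """Return the sense-strand region bounded by the forward primer and the
    reverse complement of the reverse primer, or None if either is absent."""
    start = template.find(fwd)
    if start == -1:
        return None
    rev_site = revcomp(rev)
    end = template.find(rev_site, start + len(fwd))
    if end == -1:
        return None
    return template[start:end + len(rev_site)]

# YIM 93972-specific primers from the text; the template here is synthetic.
FWD = 'GGGCGTCCAGCGGAAACC'
REV = 'CCATCAGCCTGACTGTCAT'
template = 'AAAA' + FWD + 'T' * 30 + revcomp(REV) + 'GGGG'
amplicon = predicted_amplicon(template, FWD, REV)
```

Real in silico PCR must also tolerate mismatches and check the opposite strand; this sketch only shows the exact-match geometry of the primer pair.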
Genomic sequencing
Total genomic DNA from strain YIM 93972 was extracted using the DNeasy PowerSoil Kit (QIAGEN, Hilden, Germany). DNA quality was assessed by 1% agarose gel electrophoresis, and purity was measured using a NanoDrop 2000 spectrophotometer (ThermoFisher Scientific, Waltham, MA, USA). Qualified genomic DNA (5 μg) was subjected to SMRT sequencing on a Pacific Biosciences RSII sequencer (Pacific Biosciences, Menlo Park, CA, USA). The Hierarchical Genome Assembly Process (HGAP, v2.3.0) pipeline was used to generate a high-quality de novo assembly of the genome with default parameters 45 . Genomic DNA of the other five isolates was extracted and quality-assessed using the same procedure, then split into two batches and sequenced using Oxford Nanopore (PromethION). The raw datasets were filtered, and the subreads were assembled with Canu (v1.5) 46 and corrected with Pilon (v1.22) 47 . Residual errors in the assembled genomes were further corrected using second-generation sequencing data from an Illumina NovaSeq 6000; the resulting high-quality reads yielded one or more gap-free contigs per genome. The rRNAs and tRNAs of YIM 93972 were predicted using RNAmmer (v1.2) 48 and tRNAscan-SE (v1.3.1) 49 , respectively. Gene prediction and annotation were performed using Prokka (v1.14.6). The circular maps were generated by DNAPlotter (v1.11) 50 . Detailed assembly and annotation results are shown in Supplementary Data 2a–f .
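Assembly quality in pipelines such as HGAP or Canu/Pilon is conventionally summarized by contig statistics such as N50; the assemblers report these themselves, so the sketch below is purely illustrative of the standard calculation:

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L together cover
    at least half of the total assembly size."""
    total = sum(contig_lengths)
    covered = 0
    for length in sorted(contig_lengths, reverse=True):
        covered += length
        if covered * 2 >= total:
            return length
    return 0
```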
Comparative genome and phylogenome
The complete genomes of 122 Halobacteria isolates were downloaded from the NCBI Genomes site. Additionally, contig-level genome assemblies of the pleomorphic haloarchaeon Halocatena pleomorpha SPP-AMP-1 T (GCF_003862495.1) and of a closely related non-morphogenetic haloarchaeon, Halomarina oriensis JCM 16495 T (GCF_009791395.1), were downloaded from the same source. Protein sequences encoded by these 124 publicly available Halobacteria genomes, as well as those encoded in the six genomes obtained in this study, were analyzed to establish the orthology relationships between their genes (Supplementary_data_file_ 1 ). Initially, all protein sequences were clustered using MMseqs2 (v14-7e284) 51 with a sequence similarity threshold of 0.5; the clusters were then refined over several iterations of the following procedure: cluster alignments obtained with MUSCLE (v5) 52 were compared to each other using HHSEARCH (v3.3.0) 53 ; clusters displaying full-length similarity were merged; approximate ML phylogenetic trees were reconstructed with FastTree (v2.1.11) 54 from the merged cluster alignments 52 and rooted at the mid-point; and the trees were parsed into subtrees, maximizing the ratio of taxonomic coverage (number of distinct genome assemblies in the subtree) to the paralogy index (average number of sequences per assembly).
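The subtree-parsing step maximizes the ratio of taxonomic coverage to the paralogy index. A minimal sketch of that score, assuming the paralogy index is simply the number of sequences per distinct assembly, as defined in the text:

```python
def subtree_score(assembly_ids):
    """Score a candidate subtree by the ratio of taxonomic coverage
    (number of distinct genome assemblies) to the paralogy index
    (average number of sequences per assembly)."""
    if not assembly_ids:
        return 0.0
    coverage = len(set(assembly_ids))
    paralogy_index = len(assembly_ids) / coverage
    return coverage / paralogy_index
```

A subtree with one sequence per genome thus scores higher than one of equal size dominated by paralogs from few genomes.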
This procedure produced a set of 14,870 clusters of orthologous genes (haloCOGs; excluding singleton sequences) that were subsequently used for comparative evolutionary genomic analysis of Halobacteria genes. Functional annotation of haloCOGs was obtained by comparing haloCOG alignments to CDD 55 and arCOG 56 sequence profiles using HHSEARCH 53 .
A set of 268 haloCOGs, each represented in all 130 genomes with at most 4 additional paralogs, was used to determine the genome-level phylogeny of Halobacteria (Supplementary Data 3 , Supplementary_data_file_ 2 ). When paralogs were present, the index ortholog was selected based on the BLOSUM62 57 alignment score between the paralogs and the alignment consensus. Columns in the alignments of these 268 haloCOGs were filtered 58 for a maximum fraction of gaps (0.667) and a minimum homogeneity (0.05). The concatenated alignment contained 71,586 amino-acid sites. The phylogenetic tree was reconstructed using IQ-TREE (v2.2.0) 59 under the LG+F+R10 model, selected by the built-in model finder, and rooted according to Rinke et al. 2021 (Supplementary_data_file_ 2 ) 60 .
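The column filtering can be sketched as follows; the gap threshold (0.667) is from the text, while the homogeneity measure used here (frequency of the most common non-gap residue) is an assumption, since the cited filtering script may define homogeneity differently:

```python
def filter_columns(rows, max_gap_fraction=0.667, min_homogeneity=0.05):
    """Drop alignment columns with too many gaps or too little homogeneity.

    `rows` are equal-length aligned sequences ('-' marks a gap). Homogeneity
    is approximated as the frequency of the most common non-gap residue.
    Returns the filtered alignment, one string per input row.
    """
    n_rows = len(rows)
    kept = []
    for col in zip(*rows):
        gaps = col.count('-')
        if gaps / n_rows > max_gap_fraction:
            continue  # too gappy; all-gap columns are always dropped here
        residues = [c for c in col if c != '-']
        top = max(residues.count(r) for r in set(residues))
        if top / len(residues) >= min_homogeneity:
            kept.append(col)
    return [''.join(chars) for chars in zip(*kept)] if kept else [''] * n_rows
```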
The history of gene gains and losses (Supplementary Data 4b, c ) in Halobacteria was reconstructed from the haloCOG phyletic patterns and the Halobacteria phylogenetic tree using GLOOME (v201305) 61 . Gains or losses of a gene on a particular edge of the phylogenetic tree were inferred from the change in the posterior probability of gene presence between the ancestral and descendant genomes at the respective ends of that edge; a change in probability exceeding 0.5 in magnitude was interpreted as a likely gain or loss of the gene (Supplementary Data 4b, c ).
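The calling rule, a change in posterior presence probability exceeding 0.5 in magnitude along an edge, can be sketched directly:

```python
def call_gain_loss(p_ancestor, p_descendant, threshold=0.5):
    """Interpret the change in the posterior probability of gene presence
    along a tree edge: an increase exceeding `threshold` is called a gain,
    a decrease exceeding `threshold` a loss; otherwise no call is made."""
    delta = p_descendant - p_ancestor
    if delta > threshold:
        return "gain"
    if delta < -threshold:
        return "loss"
    return "no-call"
```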
Screening of morphologically defective mutants and genome resequencing
To obtain poorly differentiated and undifferentiated mutants, nitroso-guanidine (NTG) mutagenesis was applied to wild-type YIM 93972. Spore suspensions were prepared by adding sterile glass beads (5 mm diameter; 1 g per 10 mL medium) to ISP 4 medium containing (g/L): soluble starch, 10; K 2 HPO 4 , 1; MgSO 4 , 1; (NH 4 ) 2 SO 4 , 2; NaCl, 250, incubating at 37 °C for 3 days, and then adjusting to 10 8 spores/mL. NTG treatment was carried out by incubating the spore suspension with NTG at concentrations from 0.2 to 1 mg/mL at 30 °C for 30 min with shaking at 90 g. Samples of 10 mL were harvested by centrifugation at 2400 g for 5 min, and the spores were washed three times with 20 mL of 20% sterile NaCl. Spore lethality rates were measured with the LIVE/DEAD BacLight bacterial viability kit (ThermoFisher Scientific, Waltham, MA, USA) as described previously 62 . NTG-treated spores with ~85% lethality were diluted to 10 2 and 10 3 spores per milliliter, and 100 μL aliquots were spread evenly on ISP 4 plates containing 25% NaCl and incubated at 37 °C for 28 days. Poorly differentiated (transitional) and undifferentiated (bald) mutants were cultured and passaged four times for further multi-omics study (Supplementary Fig. 10a ). Many mutants reverted to the wild-type morphology upon sub-cultivation, so after four generations only 5 transitional and 3 bald mutants with stable morphological mutant phenotypes remained. To further confirm the morphological mutations of strain YIM 93972, cell morphology was also checked by scanning electron microscopy as described above. Compared with wild-type colonies, two transitional colonies had branched substrate mycelia and only sparse aerial mycelia, whereas the three bald colonies had only branched substrate mycelia (Supplementary Fig. 10b ).
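The ~85% lethality criterion for selecting NTG-treated spore batches reduces to a simple proportion from the LIVE/DEAD staining counts; the ±5% tolerance below is an assumption for illustration, not a value from the protocol:

```python
def lethality_rate(live_count, dead_count):
    """Fraction of killed spores estimated from LIVE/DEAD staining counts."""
    total = live_count + dead_count
    if total == 0:
        raise ValueError("no spores counted")
    return dead_count / total

def within_target(rate, target=0.85, tolerance=0.05):
    """Check whether a measured lethality rate is close to the ~85% target
    used to select treated batches (tolerance is a hypothetical choice)."""
    return abs(rate - target) <= tolerance
```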
Comparative transcriptome
To collect the aerial and substrate hyphae of the wild type, viable spores of strain YIM 93972 (10 7 per mL) were collected after full sporulation of the aerial hyphae (28 days of incubation on solid medium) and spread on a layer of 0.20 μm BioTrace NT Nitrocellulose Transfer Membrane (Pall China, Beijing, China) laid over ISP 4 medium containing 20% NaCl and 2% agar in 85 mm Petri dishes. The plates were cultured at 37 °C for 21 days. In total, 96 culture plates were prepared and split into 6 groups serving as biological replicates. The aerial hyphae growing on top of the nitrocellulose membrane and the substrate hyphae that had penetrated through it were each recovered by scraping with a plain spatula. For the two transitional and three bald mutants, equal amounts of mechanically broken hyphae were spread on the same plates as described above. In total, 192 culture plates were prepared for each mutant and split into 6 groups serving as biological replicates; the substrate hyphae were recovered at 28 days. Six pooled hyphal samples were thus collected for each strain, half of which were used for transcriptomic analysis and the other half for proteomic analysis.
Total RNA was extracted and analyzed as described previously 63 . Briefly, total RNA was extracted with the Trizol method. The TruSeq Stranded Total RNA Library Prep Kit (Illumina, San Diego, CA, USA) was used to prepare the cDNA libraries, followed by paired-end 100 bp sequencing on an Illumina HiSeq 2000 system (Illumina, San Diego, CA, USA). The RNA-Seq reads of each sample were mapped to the reference genome using Bowtie2 (v2.4.2) 64 . RSEM 65 (v1.3.3) was used to estimate TPM values, and SNP analysis was performed with SAMtools 66 (v1.12). Detailed quality control is shown in Supplementary Data 6 and Fig. 3b . Genes with expression ratios greater than 1.5-fold and p -values below 0.05 (Student’s t -test) were considered regulated and used for further bioinformatics analysis (Fig. 3d and Supplementary Data 9 ).
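The TPM measure estimated by RSEM, and the 1.5-fold / p < 0.05 filter applied to it, can be sketched as follows (the p-value is assumed to come from an upstream Student's t-test, as in the text; RSEM's actual estimator additionally handles multi-mapping reads):

```python
def tpm(counts, lengths_kb):
    """Transcripts per million from raw counts and transcript lengths (kb):
    TPM_i = (counts_i / length_i) / sum_j(counts_j / length_j) * 1e6."""
    rates = [c / l for c, l in zip(counts, lengths_kb)]
    total = sum(rates)
    return [r / total * 1e6 for r in rates]

def is_regulated(mean_a, mean_b, p_value, fold=1.5, alpha=0.05):
    """Flag a gene as regulated when the expression ratio exceeds `fold`
    in either direction and the test p-value is below `alpha`."""
    ratio = max(mean_a, mean_b) / max(min(mean_a, mean_b), 1e-9)
    return ratio > fold and p_value < alpha
```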
Comparative proteome
For proteomic analysis, the aerial and substrate hyphae were suspended in lysis buffer [9 M urea, 10 mM Tris-HCl (pH 8.0), 30 mM NaCl, 5 mM iodoacetamide (IAA), 5 mM Na 4 P 2 O 7 , 100 mM Na 2 HPO 4 (pH 8.0), 1 mM NaF, 1 mM Na 3 VO 4 , 1 mM sodium glycerophosphate, 1% phosphatase inhibitor cocktail 2, 1% phosphatase inhibitor cocktail 3, EDTA-free protease inhibitor cocktail (1 tablet/10 mL lysis buffer)] and disrupted with a Soniprep sonicator (Scientz, Ningbo, Zhejiang, China) for 10 min (2 s on and 4 s off, amplitude 30%) as described 67 . The lysates were centrifuged at 16,200 g for 10 min at 4 °C to remove debris. The quality and concentration of the extracted total cell lysates (TCL) were assessed by 10% SDS-PAGE (Supplementary Fig. 11a ). Technical replicates were included for the wild-type, transitional, and bald groups.
Equal amounts of pooled protein (120 μg) from each sample were reduced with 5 mM dithiothreitol (DTT) at 45 °C for 30 min, followed by alkylation with 10 mM iodoacetamide (IAA) at room temperature for 30 min. The alkylated samples were pre-cleaned on a 10% SDS-PAGE gel (0.7 cm) 68 and digested in-gel with 12.5 ng/μL trypsin 69 at 37 °C for 14 h. Equal amounts of peptides from each differentiation condition were used for 10-plex TMT labeling (ThermoFisher Scientific, Waltham, MA, USA). TMT labeling was carried out as follows: wild-type substrate hyphae (W-SH) technical replicates were labeled with 126 and 127 N ; wild-type aerial hyphae (W-AH) with 127 C ; transitional substrate hyphae (T-SH) technical replicates with 128 N and 128 C , and the T-SH biological replicate with 129 N ; bald substrate hyphae (B-SH) technical replicates with 130 N and 130 C , and B-SH biological replicates with 129 C and 131, following the manufacturer’s protocol (Fig. 4a ). The labeled peptides were mixed and dried in a vacuum dryer (LABCONCO CentriVap, Kansas City, MO, USA).
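The 10-plex labeling scheme maps reporter channels to samples as follows; this is a lookup table transcribed directly from the assignments stated in the text:

```python
# TMT 10-plex reporter channel -> sample, as described in the labeling scheme.
TMT_CHANNELS = {
    "126":  "W-SH tech rep 1",
    "127N": "W-SH tech rep 2",
    "127C": "W-AH",
    "128N": "T-SH tech rep 1",
    "128C": "T-SH tech rep 2",
    "129N": "T-SH bio rep",
    "129C": "B-SH bio rep 1",
    "130N": "B-SH tech rep 1",
    "130C": "B-SH tech rep 2",
    "131":  "B-SH bio rep 2",
}

def channels_for(sample_prefix):
    """All reporter channels assigned to samples whose label starts with
    the given prefix (e.g. 'B-SH')."""
    return sorted(ch for ch, label in TMT_CHANNELS.items()
                  if label.startswith(sample_prefix))
```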
The mixed TMT-labeled peptides were fractionated on a Durashell C 18 high-pH reverse-phase (RP) column (150 Å, 5 μm, 4.6 × 250 mm, Bonna-Agela Technologies Inc., Newark, DE, USA) using a Rigol L-3120 HPLC system (Beijing, China) as described previously 67 . Briefly, the solvent gradients comprised buffer A (98% double-distilled H 2 O and 2% ACN, pH 10, adjusted with ammonium hydroxide) and buffer B (2% double-distilled H 2 O and 98% ACN, pH 10). The mixed samples were dissolved in buffer A. After loading, the peptides were separated with a 60 min linear gradient (0% B for 5 min, 0–3% B for 3 min, 3–22% B for 37 min, 22–32% B for 10 min, 32–90% B for 1 min, 90% B for 2 min, and 100% B for 2 min). The LC flow rate was set at 0.7 mL/min, and the column was maintained at 45 °C. Eluent was collected every 1 min (Supplementary Fig. 11b ). The 60 fractions were dried and concatenated into 10 fractions (Supplementary Fig. 11c ). The combined peptides were subjected to LC−MS/MS analysis.
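Concatenating 60 high-pH fractions into 10 is conventionally done with a modular scheme so that each pooled fraction mixes early-, middle-, and late-eluting peptides; the exact pooling map was not specified here, so the modulo rule below is an assumption:

```python
def concatenate_fractions(n_fractions=60, n_pools=10):
    """Concatenation scheme: fraction i goes to pool i % n_pools, so each
    pool combines fractions spread evenly across the elution gradient.
    Returns a list of pools holding 1-based fraction numbers."""
    pools = [[] for _ in range(n_pools)]
    for i in range(n_fractions):
        pools[i % n_pools].append(i + 1)
    return pools
```

Under this scheme, pool 1 would combine fractions 1, 11, 21, 31, 41, and 51, and so on.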
The fractions were analyzed on a Q Exactive HF mass spectrometer (ThermoFisher Scientific, Waltham, MA, USA) after Easy-nLC 1200 separation (ThermoFisher Scientific, Waltham, MA, USA). Briefly, samples were loaded onto a self-packed capillary column (75 μm i.d. × 50 cm, 1.9 μm C 18 ) and eluted with a 135 min linear gradient (4–8% B for 13 min, 8–25% B for 86 min, 20–50% B for 21 min, 50–90% B for 3 min, 90% B for 12 min). Full MS scans were performed over m/z 375–1,400 at a resolution of 1.2 × 10 5 ; the maximum injection time (MIT) was 80 ms, and the automatic gain control (AGC) target was set to 3.0 × 10 6 . For the MS/MS scans, the 15 most intense peptide ions with charge states of 2 to 6 were fragmented by higher-energy collisional dissociation (HCD) (AGC: 1 × 10 5 , MIT: 100 ms, resolution: 6 × 10 4 ). Dynamic exclusion was set to 30 s.
All raw files were searched with MaxQuant (v1.5.6.0) against the protein database of strain YIM 93972 (3744 entries) along with 245 common contaminant protein sequences ( http://www.maxquant.org ). Fully tryptic peptides with up to two missed cleavage sites were allowed. Oxidation of methionine was set as a dynamic modification, whereas carbamidomethylation of cysteine and TMT modification at the peptide N-terminus and lysine were set as static modifications. Figure 4b shows the MS identification and quantification. Detailed quality control is shown in Supplementary Fig. 11d, e . Nine protein expression clusters are shown in Fig. 4e . Proteins with changes greater than 1.5-fold and p -values below 0.05 (Significance A 70 ) were considered regulated and used for further bioinformatics analysis (Fig. 4e and Supplementary Data 12 ).
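Significance A scores how extreme a protein's log-ratio is relative to the robust spread of all ratios. A simplified sketch based on the published percentile/erfc formulation is given below; the MaxQuant implementation includes further details (e.g. intensity binning for Significance B and multiple-testing correction), so this is only an approximation:

```python
import math

def significance_a(log_ratios, r):
    """Approximate Significance A p-value for log-ratio `r`.

    The spread of the ratio distribution is estimated robustly from the
    15.87th, 50th and 84.13th percentiles; `r` is converted to a z-score
    on the relevant side and then to a p-value via the error function.
    """
    s = sorted(log_ratios)

    def pct(q):  # percentile with linear interpolation
        idx = q * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        frac = idx - lo
        return s[lo] * (1 - frac) + s[hi] * frac

    r_lo, r_med, r_hi = pct(0.1587), pct(0.5), pct(0.8413)
    if r >= r_med:
        z = (r - r_med) / (r_hi - r_med)
    else:
        z = (r_med - r) / (r_med - r_lo)
    return 0.5 * math.erfc(z / math.sqrt(2))
```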
Heterologous expression and functional analysis
To characterize the putative oligopeptide transporter encoded by ORF _ 2669 - ORF_2673 of YIM 93972, a bialaphos resistance test 34 , 67 (0.1–1.0 μg/mL, in 0.1 μg/mL increments) was performed on solid ISP 4 medium. Then, the genes ORF_2669-ORF_2673 were heterologously expressed in a Streptomyces coelicolor M145 71 mutant carrying a deletion of the bldKA - KE gene cluster. The ΔbldKA - KE mutant was constructed by the CRISPR-Cas9 genome editing method as previously described 72 . The strains and plasmids used are listed in Supplementary Data 15 , and the primers in Supplementary Data 16 . Briefly, the upstream (1139 bp) and downstream (1310 bp) regions of bldKA - KE were amplified using the primer pairs bldKA - KE -up-F/R and bldKA - KE -down-F/R, respectively. The single guide RNA (sgRNA) transcription cassette was obtained from the plasmid pKCCas9dO with the primers bldKA - KE -gRNA F/R. The three fragments above were then assembled by overlapping PCR with the primers bldKA - KE -gRNA-F and bldKA - KE -down-R, followed by double digestion with Spe I and Hind III. The assembled fragment was cloned into pKCCas9dO and introduced into the wild-type strain M145 by conjugal transfer. The resulting strain was subsequently grown on solid MS medium without apramycin at 37 °C to remove the plasmid pKCcas9- bldKA - KE . Correct double-crossover colonies were verified with the primers bldKA - KE -J-F/R, yielding the ΔbldKA - KE mutant strain.
Using M145 and YIM 93972 genomic DNA as templates, bldKA - KE and ORF _ 2669 - ORF_2673 were amplified with the primer pairs bldKA - KE- F/R and ORF _ 2669 - ORF_2673 , respectively. The amplified products were cloned into the pIB139 vector to generate the complementation plasmids pIB139- bldKA - KE and pIB139- ORF _ 2669 - ORF_2673 . These plasmids were introduced into the ΔbldKA - KE mutant, resulting in two complemented strains, ΔbldKA - KE /pIB139- bldKA - KE and ΔbldKA - KE /pIB139- ORF _ 2669 - ORF_2673 . The empty vector pIB139 was transferred into the M145 strain to generate the positive control. To evaluate whether the oligopeptide ABC transporter ORF _ 2669 - ORF_2673 controls mycelium differentiation as bldKA - KE does in S. coelicolor , equal numbers of viable spores of the four strains, M145/pIB139, ΔbldKA - KE /pIB139, ΔbldKA - KE /pIB139- bldKA - KE and ΔbldKA - KE /pIB139- ORF _ 2669 - ORF_2673 , were inoculated and grown on ISP 4 agar plates at 37 °C, yielding about 700 visible colonies per plate. The substrate and aerial mycelia from 78 h cultures of the four constructed strains were examined by scanning electron microscopy (Nova NanoSEM 450, FEI, USA).
Characterization of cell wall S-layer proteins of YIM 93972
The S-layer proteins of YIM 93972 were extracted as previously described 73 . Mid-log-phase cells were harvested by centrifugation at 3500 g , and the pellets were resuspended in cell lysis buffer containing 20 mM Tris-HCl (pH 6.5), 1.0 mM phenylmethylsulfonyl fluoride (PMSF), and 1× protease inhibitor cocktail (Roche, Basel, Switzerland), then disrupted with a Soniprep sonicator (2 s on and 4 s off, amplitude 25%) for 10 min. Cell debris was removed by centrifugation at 4000 g for 15 min at 4 °C. The supernatant was further ultracentrifuged for 1 h at 250,000 g and 4 °C, and the pellet was dissolved in a buffer containing 4% SDS, 50 mM Tris (pH 8.0) and 20 mM DTT to obtain the cell wall proteins. Cell wall proteins (20 μg) were reduced with 5 mM DTT at 45 °C for 30 min and alkylated with 10 mM IAA at room temperature for 30 min. After separation on a 12% SDS-PAGE gel over 5 cm, the whole lane was sliced into 25 fractions based on the distinguishable bands on the Coomassie blue G-250-stained gel. The gel pieces were further cut into ~1 mm 3 cubes and digested in-gel with trypsin (12 ng/μL) at 37 °C for 14 h. The tryptic peptides were analyzed on an LTQ-Orbitrap Velos mass spectrometer after high-pH RP chromatography separation. The acquired raw files were searched with MaxQuant (v1.5.6.0) against the protein database of strain YIM 93972 (3744 entries) along with common contaminant protein sequences. Two S-layer proteins, ORF_1400 and ORF_1704 , were detected by peptide profile matching against the annotated protein database of YIM 93972, each supported by ≥2 peptides (Supplementary Fig. 9a–d ).
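The ≥2-peptide criterion used for the S-layer protein calls is a simple filter over per-protein peptide counts:

```python
def confident_proteins(peptide_counts, min_peptides=2):
    """Retain protein identifications supported by at least `min_peptides`
    peptides, as in the criterion used for the S-layer protein calls."""
    return sorted(p for p, n in peptide_counts.items() if n >= min_peptides)
```

The peptide counts in the usage below are hypothetical; only the identifiers ORF_1400 and ORF_1704 come from the text.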
Statistics and reproducibility
Strain cultivation, scanning EM imaging, and transmission EM imaging were each performed in at least three independent experiments. Statistical differences between two groups in the transcriptomic and proteomic datasets were assessed using two-tailed unpaired t -tests (inter-group) and Significance A 70 (intra-group), respectively. P < 0.05 was considered statistically significant. Statistical analysis and visualization of the data were performed in R (v4.1.2).
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results and discussion
A new haloarchaeal lineage with complex morphological differentiation
During an investigation of actinobacterial diversity in hypersaline environments, an unexpected hyper-halophilic archaeon with a distinct growth pattern was serendipitously discovered in the Qijiaojing Salt Lake in the Xinjiang Uygur Autonomous Region of China. This novel strain, designated YIM 93972, was isolated from a soil sample using the standard dilution-plating technique at 37 °C for four weeks on Modified Gause (MG) agar plates supplemented with 20% NaCl (w/v), a medium previously used mainly to isolate halophilic actinomycetes 11 . The colonies of YIM 93972 comprised yellow, orange, and red branching filaments with white aerial hyphae, underwent complex cellular differentiation (Fig. 1a , Supplementary Fig. 1a ), and produced spores (0.5–0.7 × 1.0–1.2 μm in size) in both solid (Fig. 1b I–III) and liquid media (Fig. 1b IV). As in other spore-forming strains 12 , the spores of this new halophilic archaeon were resistant to heat stress up to 70 °C, whereas the substrate hyphae (SH) were resistant only up to 60 °C (Supplementary Fig. 1b ). The higher heat resistance of the spores, reflecting their protective and repair capabilities, might enable them to survive under extreme saline conditions. Optimal growth of YIM 93972 was observed at 40–45 °C, 3.8–4.3 M NaCl and pH 7.0–7.5 (Supplementary Data 1 ). Transmission electron microscopy (TEM) revealed the successive stages of spore differentiation, from cell constriction, septum formation, and rod separation to chain formation (Fig. 1b V and VI). Thus, the cellular differentiation of this new haloarchaeon includes the growth of substrate and aerial hyphae and spore formation, resembling the morphogenetic development of Streptomyces .
Analysis of the 16S rRNA gene indicated that YIM 93972 was most closely related to Halocatena pleomorpha SPP-AMP-1 T , with 96.65% identity, and the two species are likely to represent a distinct lineage within the family Halobacteriaceae . Although YIM 93972 displays morphological features resembling those of actinomycetes, it is otherwise a typical archaeon that, in particular, contains glycerol diether moieties (GDEMs) and phosphatidylglycerol phosphate methyl ester (PGP-Me) in its membrane lipids and possesses an S-layer cell wall (Fig. 1c, d , Supplementary Fig. 2 , Supplementary Data 1 ). In addition, genome analysis and proteomic data indicate that, like many extremely halophilic archaea, YIM 93972 produces halorhodopsin (Supplementary Fig. 3 ), a 7-transmembrane protein that functions as a light-driven ion transporter. However, YIM 93972 lacks the genes encoding enzymes involved in the biosynthesis of bacterioruberin (in particular, lycopene elongase, halo.01227), a C 50 carotenoid that has been identified in several extremely halophilic archaea and whose production is regulated by halorhodopsin 13 , 14 .
To investigate the ecological distribution of YIM 93972 and its relatives, we collected 15 soil samples from the Aiding, Large South, Dabancheng East, and Uzun Brac Lakes of the Xinjiang Uygur Autonomous Region and amplified 16S rRNA genes from the environmental DNA using a pair of PCR primers designed from the 16S rRNA gene sequence of YIM 93972. From two soil samples from the Aiding Salt Lake, 47 16S rRNA gene sequences were obtained and shown to form a clade with YIM 93972 (OTU13 in Supplementary Fig. 4a , sequences in Supplementary Data 18 ). Thus, YIM 93972 and related haloarchaea appear to be widespread in salt lake environments. Additionally, an attempt was made to isolate archaea related to YIM 93972 from the same soil samples from these salt lakes using the same method on Modified Gause agar plates. In the end, three strains (YIM A00010, YIM A00011, and YIM A00014) from the Aiding Salt Lake and two strains (YIM A00012 and YIM A00013) from the Uzun Brac Salt Lake were confirmed as novel archaeal strains with morphological differentiation (Fig. 2a ).
The 16S rRNA phylogenetic tree showed that H. pleomorpha SPP-AMP-1 T , recently isolated from a man-made saltpan site in India 15 , and Halomarina oriensis JCM 16495 T , isolated from a seawater aquarium in Japan 16 , were the two known species most closely related to the six new strains with morphological differentiation (Supplementary Fig. 4b ). Cells of H. oriensis JCM 16495 T had irregular coccoid or discoid shapes, whereas those of H. pleomorpha SPP-AMP-1 T were pleomorphic in addition to rod-shaped. These two strains and two other pleomorphic strains ( Haloferax volcanii CGMCC 1.2150 T 17 and Haloplanus salinarum JCM 31424 T 18 ) were collected for morphological comparison. On the same ISP 4 medium containing 20% NaCl, only H. pleomorpha SPP-AMP-1 T exhibited differentiated morphology similar to that of YIM 93972 (Fig. 2a , Supplementary Figs. 5 and 6 ). Collectively, these observations demonstrate that a distinct group of morphologically differentiating (hereafter morphogenetic, for brevity) haloarchaea is widespread in various hypersaline environments.
Genomic features of morphogenetic haloarchaea
To gain further insight into the biology of morphogenetic haloarchaea (referring specifically to haloarchaea with hyphal differentiation and spore formation), the complete genomes of YIM 93972 and the other five morphogenetic strains were sequenced. The genome of YIM 93972 comprises a major chromosome (2,676,592 bp, 57.2% G + C content, 2803 predicted genes), a minor chromosome (844,905 bp, 54.7% G + C content, 722 predicted genes), and three plasmids (Supplementary Fig. 7 ; Supplementary Data 2a ). Altogether, the genome of YIM 93972 encompasses 3744 predicted protein-coding genes, 47 tRNAs, and two rRNA operons (the two 16S rRNA genes share 99.8% identity) located on the major and minor chromosomes. Although all six genomes of the morphogenetic haloarchaeal strains consisted of two chromosomes of different sizes, they otherwise displayed many distinct genomic features (Supplementary Data 2 , Supplementary Data 2b–f ). In particular, the genome sizes of YIM A00010 and YIM A00011 (5.97 Mb and 5.82 Mb, respectively) are substantially larger than those of the other four strains (3.23 Mb on average). Among the six strains, only YIM A00013 had two rRNA operons like YIM 93972, but its genome carried fewer plasmids and encompassed fewer genes.
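The reported G + C contents are simple base-composition fractions; as a minimal sketch of the calculation (the short sequences in the usage are illustrative, not genomic data):

```python
def gc_content(seq):
    """Fraction of G + C bases in a nucleotide sequence (case-insensitive)."""
    seq = seq.upper()
    gc = sum(seq.count(base) for base in "GC")
    return gc / len(seq)
```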
We then constructed a set of 14,870 clusters of haloarchaeal orthologous genes (haloCOGs; Supplementary_data_file_ 1 ). Phylogenomic analysis of 268 low-paralogy haloCOGs that are universally conserved in 130 genomes of the class Halobacteria (Supplementary Data 3 ) showed that YIM 93972 and its morphogenetic relatives, including H. pleomorpha , formed a clade with Halomarina oriensis within the family Halobacteriaceae (Fig. 2b , Supplementary_data_file_ 2 ). The morphogenetic haloarchaea shared a core of 2053 strictly conserved orthologous genes, but lacked 336 genes that are common in other Halobacteria (Supplementary Data 3 , Supplementary Data 4a ). For 3208 of the 3744 predicted genes of YIM 93972, an ortholog was identified in at least one non-morphogenetic haloarchaeon. After mapping the patterns of gene presence-absence in haloCOGs onto the species tree, we obtained a maximum likelihood reconstruction of the history of gene gains and losses in the class Halobacteria . The ancestor of the morphogenetic haloarchaea was estimated to contain 3903 haloCOGs, of which 1420 were represented on the tree branch that separates the morphogenetic clade from its common ancestor with the sister clade, whereas 339 new genes were gained (Supplementary Data 4b ).
Several gene losses appeared to be relevant for the morphogenesis of these new haloarchaea, including their hyphal formation and lack of cell motility (Supplementary Data 4c ). These include the loss of CetZ (FtsZ3), which is involved in controlling the rod-like cell shape in non-morphogenetic haloarchaea 19 , as well as of the archaellum and another halobacterial type IV pili system, typified by the PilB3 locus of Haloferax volcanii 20 , which is represented in Halobacteria and Methanomicrobia 20 . Rod-shaped cells were not observed in morphogenetic haloarchaea, suggesting a functional link between the loss of cetZ and complex hyphal differentiation. Conversely, in morphogenetic haloarchaea, the halo.02163 family of MreB-like proteins is expanded, whereas the halo.02581 family of MreB-like proteins that is common in other haloarchaea has been lost. Given that MreB is the key cell shape determinant in bacteria 21 , these findings further emphasize specific changes in cytoskeleton organization that are likely to be relevant for the pleomorphism of these archaea (Supplementary Fig. 8 ).
Among the gains, 61 genes are strictly conserved in all 7 morphogenetic haloarchaea and absent in the other Halobacteria in our genome set. These gained genes include a distinct homolog of Cdc48, an AAA+ ATPase (halo.06695, ORF_0238 ). Proteins of this superfamily play key roles in a variety of cellular processes, including cell-cycle regulation, DNA replication, cell division, and the ubiquitin-proteasome system 22 , 23 , and might contribute to morphological differentiation. Another notable case is an M6 family metalloprotease (halo.06530 and halo.06177, ORF_0582 and ORF_1236 ), a distant homolog of InhA1, a major component of the exosporium of Bacillus spores 24 . In most archaea, the S-layer is the exclusive cell envelope component and plays a crucial role in surface recognition and cell shape maintenance 25 , 26 . The morphogenetic haloarchaea share a distinct S-layer-forming glycoprotein (halo.06692, ORF_1400 ), which might be related to their unique morphology. Furthermore, proteomic data showed that YIM 93972 produces two S-layer proteins, encoded by ORF_1400 and ORF_1704 (Supplementary Fig. 9a–d ). For most of the remaining genes gained and, in some cases, duplicated in the morphogenetic haloarchaea, there was no functional prediction or only a generic one, although, notably, many of these genes encode predicted transmembrane or secreted proteins (Supplementary Data 4b, d ). Several of these proteins contain aspartate-rich repeats that could coordinate Ca 2+ ions, which are known to activate spore germination in sporulating actinobacteria 27 .
Sporulating bacteria employ a distinct set of small molecules for energy storage and spore protection. We identified genes, present in all 7 genomes of morphogenetic haloarchaea, that are involved in trehalose biosynthesis and utilization 28 , dipicolinic acid biosynthesis 29 and poly(R)-hydroxyalkanoic acid biosynthesis 30 (Supplementary Data 3 ). Comparative omics data indicated that genes involved in the biosynthesis of small molecules implicated in sporulation, namely, poly(R)-hydroxyalkanoic acid synthesis ( ORF_2608 ) and 4-hydroxy-tetrahydrodipicolinate synthesis ( ORF_1392 ), were upregulated 3.6- and 7.8-fold at the mRNA level, and 1.3- and 1.6-fold at the protein level, respectively. This observation suggests that poly(R)-hydroxyalkanoic acid could be a key storage molecule in the spores; polyhydroxyalkanoates also accumulate in the hyphae and spores of some actinobacterial species 30 . However, all these genes are also widespread in non-morphogenetic Halobacteria . Another notable gene module present in all genomes of morphogenetic haloarchaea consists of a MoxR family ATPase and a von Willebrand factor type A (vWA) domain-containing protein, a combination that is strongly associated with bacterial multicellularity 31 .
Random mutagenesis and comparative multi-omics uncover several morphogenetic genes
Seeking to identify the genetic basis of cellular differentiation in morphogenetic haloarchaea, we performed nitroso-guanidine (NTG) chemical mutagenesis on spores of YIM 93972. After four generations of serial cultivation from the original 1110 mutant colonies, we obtained five transitional (weak aerial hyphae) and three bald (no aerial hyphae) mutants (Supplementary Fig. 10a, b ). Analysis of the genome sequences of these 8 mutants identified two mutations in protein-coding sequences and one mutation upstream of a gene shared by all three bald mutants, and one mutation upstream of a gene shared by three of the five transitional mutants (Supplementary Fig. 10c , Supplementary Data 5 ). In the three bald mutants, the two intragenic mutations were a non-synonymous G1067A substitution in ORF_0238 (ATPase of the AAA+ class, Cdc48 family) and a synonymous C309T substitution in ORF_0964 (RNA methyltransferase, SPOUT superfamily), whereas the third mutation was a T136G substitution in the non-coding region upstream of ORF_2797 (ParB-like DNA-binding protein). Among the five transitional mutants, three (T2, T4, and T5) shared a C69T substitution in the intergenic region upstream of ORF_1717 (uncharacterized protein).
Given the lack of an experimental genetic system for YIM 93972, we performed whole-transcriptome (Fig. 3a , Supplementary Data 6 – 9 ) and quantitative proteomic analyses (Fig. 4a , Supplementary Fig. 11 , Supplementary Data 10 – 12 ) of the two transitional and three bald mutants to further characterize the potential association of the identified mutations with the mutant phenotypes. We identified the G1067A substitution in ORF_0238 in the transcriptomes of all transitional mutants (except for two biological replicates of mutant T2) and all bald mutants, but not in the wild-type transcriptome (Supplementary Fig. 10d , Supplementary Data 7 ). Furthermore, ORF_0238 was downregulated 3.9-fold ( p < 0.001) at the protein level in the bald mutants (Supplementary Data 11 , 12 ). Phylogenetic analysis of the Cdc48 family indicated that ORF_0238 belongs to a distinct branch specific to morphogenetic haloarchaea (Supplementary Fig. 12 ). Taken together, these observations suggest that ORF_0238 might be functionally important for sporulation and morphological differentiation in YIM 93972.
To further characterize the genetic basis of hyphal differentiation, we compared the transcriptome and quantitative proteome of the wild type from both aerial hyphae and substrate hyphae with those of morphogenetic mutants from substrate hyphae only (given the absence of aerial hyphae) (Figs. 3 a, b and 4a–c , Supplementary Fig. 11a–e ). Quantitative analysis revealed 107 genes (Fig. 3c–d , Supplementary Data 9 , Supplementary Data 19 ) and 118 proteins (Fig. 4d, e , Supplementary Data 12 , Supplementary Data 20 ) that were significantly upregulated in the aerial hyphae of wild type compared to the substrate hyphae of both wild and mutant types. The differentially expressed genes and proteins play major roles in translation, transcription, metabolism, and ion transport (Figs. 3 e and 4f , Supplementary Data 13 ). In particular, ATP-binding cassette (ABC) peptide transporter operon ( ORF_2669-ORF_2673 on chromosome I) was significantly upregulated in wild-type aerial hyphae (Fig. 5a, b , Supplementary Data 14 ). In this operon, the peptide-binding protein ( ORF_2669 ) was significantly upregulated 6.91 and 1.6 fold at the mRNA and protein levels, respectively (Fig. 5c , Supplementary Data 21 ).
It has been shown that Streptomyces coelicolor employs a distinct ATP-binding cassette (ABC) transporter to import oligopeptides that serve as signals for aerial mycelium formation 32 . To test whether YIM 93972 similarly used ABC transporters for importing peptides into the mycelia, we tested the resistance of transitional and bald mutants to bialaphos, an antibiotic that enters cells via oligopeptide permeases 33 . We found that only the wild type was sensitive to bialaphos (Fig. 5d ) whereas all mutants were resistant to this antibiotic up to concentrations less than 1.0 μg/mL (Supplementary Fig. 13 ). Despite the low protein sequence similarity with the S. coelicolor oligopeptide permease (Fig. 6a), we hypothesized that the ORF_2669-ORF_2673 operon of YIM 93972 is involved in the aerial mycelium and spore formation. Deletion of the oligopeptide-transport operon ( bldKA-KE ) caused a bald phenotype (only present substrate mycelium) in S. coelicolor 34 . We introduced the entire ORF _ 2669 - ORF _ 2673 operon of YIM 93972 into the S. coelicolor M145 ΔbldKA - KE /pIB139 mutant (Supplementary Data 15 , 16 ) and found that aerial mycelium formation and sporulation were restored (Fig. 6b ). Thus, the ORF _ 2669 - ORF _ 2673 operon of YIM 93972 might be involved in aerial mycelium formation by importing signaling oligopeptides, and could be functionally equivalent to the bldKA - KE operon of S. coelicolor .
In Actinobacteria, Cyanobacteria and the sporulating Firmicutes, multiple transcriptional regulators (TRs) are involved in complex cell differentiation 35 – 38 . In our transcriptomic and proteomic data, 47 TRs were found to be significantly up-or down-regulated in aerial hyphae comparted to substrate hyphae (Supplementary Data 17 ). These included four TRs regulating phosphate uptake ( ORF_0890 , ORF_0891 , and ORF_1046 ) as well as ORF_1998 , the cell fate regulator YlbF (YheA/YmcA/DUF963 family), which were all significantly upregulated. The ortholog of ORF_1998 is involved in competence development and sporulation in Bacillus subtilis 39 . Another upregulated gene, ORF_ 1932 , encodes the global TR BolA, which is a key player in biofilm development 39 – 41 . Furthermore, the AbrB family TRs encoded in the same operon with genes involved in poly(R)-hydroxyalkanoic acid biosynthesis, the energy storage molecule in spores, was also upregulated. These findings are compatible with the major roles of distinct TRs in the cellular differentiation of morphogenetic archaea.
In summary, we report the biological, genetic and biochemical characteristics of a haloarchaeon that displays complex cellular differentiation. Although the mechanism of this differentiation remains to be elucidated, our findings implicate several genes, in particular, a distinct Cdc48-like ATPase that was mutated in all non-differentiating mutants. The observation that the morphogenetic haloarchaea are closely related to each other and, in the phylogenetic tree of Halobacteria , form a distinct clade within the family Halobacteriaceae suggests a relatively recent origin of this complex phenotype. It remains to be shown whether complex cellular differentiation, which evolved independently in several groups of bacteria, also independently emerged in other archaea.
Taxonomic description
As mentioned above, phylogenetic analysis of 16S rRNA and concatenated conserved genes indicated that strain YIM 93972 formed a clade with the closest related species Halocatena pleomorpha within family Halobacteriaceae . The average nucleotide identity based on BLAST (ANIb) value between YIM 93972 and H. pleomorpha SPP-AMP-1 T was 70.59%, which was far below the cut-off values <75 % for genus demarcation in the class Halobacteria 42 . Further, the low AAI (68.4%) between YIM 93972 and H. pleomorpha SPP-AMP-1 T confirms the novelty of YIM 93972 at the genus level 42 . The substrate mycelium of YIM 93972 formed sporangia, and the aerial mycelium formed spore chains at maturity and the spores were cylindrical with wrinkled surfaces. The respiratory quinones of strain YIM 93972 are MK-8 and MK-8(H 2 ) (Supplementary Fig. 14 ). By virtue of having a different isomer for MK-8 (H 2 ), the respiratory quinones of YIM 93972 differ from those of H. pleomorpha SPP-AMP-1 T , which contains only menaquinone MK-8. The major polar lipids of the strain were chromatographically identified as phosphatidylglycerol, phosphatidylglycerolphosphate methyl ester and five unidentified glycolipids, while H. pleomorpha SPP-AMP-1 T has phosphatidylglycerol, phosphatidylglycerolphosphate methyl ester, glycosyl mannosyl glucosyl diether and sulphated glycosyl mannosyl glucosyl diether. Besides the phylogenetic and morphological differences, the isolate is differentiated from the closely related genera by its chemotaxonomic markers (Supplementary Data 1 ). Based on the above results, YIM 93972 T merits representation as a new species in a new genus within the family Halobacteriaceae , for which the name Actinoarchaeum halophilum gen. nov., sp. nov. is herewith proposed.
Description of Actinoarchaeum gen. nov
Actinoarchaeum (Ac.tino.ar.chae’um. Gr. n. aktis - inos , a ray; N.L. neut. n. archaeum (from Gr. adj. archaios - ê - on , ancient), archaeon; N.L. neut. n. Actinoarchaeum ray archaeon, referring to the radial arrangement of filaments).
Aerobic, extremely halophilic, Streptomyces -like colonies that produce brown substrate mycelia forms terminal sporangia with white aerial mycelia and white spore on modified ISP 4 medium. The major polar lipids are phosphatidylglycerol, phosphatidylglycerol phosphate methyl ester and five unidentified glycolipids. The predominant menaquinones are MK-8 and MK-8(H 2 ). The G + C content of the genomic DNA is about 56.3 mol%. The type species is Actinoarchaeum halophilum . Recommended three-letter abbreviation: Aah .
Description of Actinoarchaeum halophilum sp. nov
Actinoarchaeum halophilum (ha.lo’phi.lum. Gr. n. hals halos, salt; N.L. adj. philus - a - um , from Gr. adj. philos - ê -on, friend, loving; N.L. neut. adj. halophilum , salt-loving).
Morphological, chemotaxonomic, and general characteristics are as given above for the genus. Cells require 2.1–6.0 M NaCl, pH 6.0–9.0, 25–50 °C and Mg 2+ (0.01–0.7 M) for growth. Optimal growth occurs at 3.8–4.2 M NaCl, pH 7.0–7.5, 40–45 °C. Oxidase-weakly positive and catalase-negative. Positive for nitrate reduction, hydrolysis of Tweens (20, 40, 60, 80), casein and gelatin; while negative for productions of indole and H 2 S. Acetate, citrate, dextrin, fructose, fumarate, glucose, glycerol, malate, mannitol, mannose, pyruvate, rhamnose, succinate, sucrose and trehalose are utilized as sole carbon sources, but galactose, lactate, lactose, xylitol and xylose are not. Acid is not produced from any of sole carbon sources stated above. The strain contains MK-8 and MK-8(H 2 ). Polar lipids include PG, PGP-Me and 5GLs. The genomic DNA G+C content is 56.3%. The type strain is YIM 93972 T (=DSM 46868 T = CGMCC 1.17467 T ), isolated from a soil sample from Salt Lake in Xinjiang Uygur Autonomous Region of China. | Results and discussion
A new haloarchaeal lineage with complex morphological differentiation
During an investigation of actinobacterial diversity in hypersaline environments, a hyper-halophilic archaeon with a distinct growth pattern was serendipitously discovered in the Qijiaojing Salt Lake in the Xinjiang Uygur Autonomous Region of China. This novel strain, designated YIM 93972, was isolated from a soil sample by the standard dilution-plating technique (37 °C, four weeks) on Modified Gause (MG) agar plates supplemented with 20% NaCl (w/v), a medium previously used mainly to isolate halophilic actinomycetes 11 . The colonies of YIM 93972 comprised yellow, orange, and red branching filaments with white aerial hyphae, underwent complex cellular differentiation (Fig. 1a , Supplementary Fig. 1a ), and produced spores (0.5–0.7 × 1.0–1.2 μm) in both solid (Fig. 1b I-III) and liquid media (Fig. 1b IV). As in other spore-forming strains 12 , the spores of this new halophilic archaeon were resistant to heat stress up to 70 °C, whereas substrate hyphae (SH) were resistant only up to 60 °C (Supplementary Fig. 1b ). The higher heat resistance of the spores, conferred by their protective and repair capabilities, might enable them to survive under extreme saline conditions. Optimal growth of YIM 93972 was observed at 40–45 °C, 3.8–4.3 M NaCl and pH 7.0–7.5 (Supplementary Data 1 ). Transmission electron microscopy (TEM) revealed the successive stages of spore differentiation, from cell constriction, septum formation, and rod separation to chain formation (Fig. 1b V and VI). Thus, the cellular differentiation of this new haloarchaeon includes the growth of substrate and aerial hyphae and spore formation, resembling the morphogenetic development of Streptomyces .
Analysis of the 16S rRNA gene indicated that YIM 93972 was most closely related to Halocatena pleomorpha SPP-AMP-1 T (96.65% identity), and the two species are likely to represent a distinct lineage within the family Halobacteriaceae . Although YIM 93972 displays morphological features resembling those of actinomycetes, it is otherwise a typical archaeon that, in particular, contains glycerol diether moieties (GDEMs) and phosphatidylglycerol phosphate methyl ester (PGP-Me) in its membrane lipids and has an S-layer cell wall (Fig. 1c, d , Supplementary Fig. 2 , Supplementary Data 1 ). In addition, genome analysis and proteomic data indicate that, like many extremely halophilic archaea, YIM 93972 produces halorhodopsin (Supplementary Fig. 3 ), a seven-transmembrane protein that functions as a light-driven ion transporter. However, YIM 93972 lacks the genes encoding enzymes involved in the biosynthesis of bacterioruberin (in particular, lycopene elongase, halo.01227), a C 50 carotenoid that has been identified in several extremely halophilic archaea and whose production is regulated by halorhodopsin 13 , 14 .
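Pairwise 16S rRNA identities such as the 96.65% reported above are conventionally computed from an alignment of the two sequences. The following is a minimal sketch of that calculation, with toy sequences and a helper name of our choosing (not the pipeline actually used in the study):

```python
def pairwise_identity(seq_a, seq_b):
    """Percent identity between two aligned sequences of equal length.
    Columns where either sequence has a gap ('-') are excluded, one
    common convention for 16S rRNA identity calculations."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" or b == "-":
            continue  # skip gapped columns
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# toy aligned fragments (illustrative, not real 16S data)
print(pairwise_identity("ACGT-ACGTAC", "ACGTTACGTAA"))  # 90.0
```

Real analyses differ mainly in how gaps and ambiguous bases are treated, which is why reported identity values should always cite the method used.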
To investigate the ecological distribution of YIM 93972 and its relatives, we collected 15 soil samples from the Aiding Lake, Large South Lake, Dabancheng East Lake, and Uzun Brac Lake of the Xinjiang Uygur Autonomous Region and amplified 16S rRNA genes from environmental DNA using a pair of PCR primers designed on the basis of the 16S rRNA gene sequence of YIM 93972. From two soil samples from the Aiding Salt Lake, 47 16S rRNA gene sequences were obtained and shown to form a clade with YIM 93972 (OTU13 in Supplementary Fig. 4a , sequences in Supplementary Data 18 ). Thus, YIM 93972 and related haloarchaea are widespread in salt-lake environments. Additionally, an attempt was made to isolate archaea related to YIM 93972 from the same soil samples using the same method on Modified Gause agar plates. Three strains (YIM A00010, YIM A00011, and YIM A00014) from the Aiding Salt Lake and two strains (YIM A00012 and YIM A00013) from the Uzun Brac Salt Lake were thereby confirmed as novel archaeal strains displaying morphological differentiation (Fig. 2a ).
The 16S rRNA phylogenetic tree showed that H. pleomorpha SPP-AMP-1 T , recently isolated from a man-made saltpan in India 15 , and Halomarina oriensis JCM 16495 T , isolated from a seawater aquarium in Japan 16 , were the two known species most closely related to the six new morphologically differentiating strains (Supplementary Fig. 4b ). Cells of H. oriensis JCM 16495 T had irregular coccoid or discoid shapes, whereas those of H. pleomorpha SPP-AMP-1 T were pleomorphic in addition to rod-shaped. These two strains, together with two other pleomorphic strains ( Haloferax volcanii CGMCC 1.2150 T 17 and Haloplanus salinarum JCM 31424 T 18 ), were collected for morphological comparison. On the same ISP 4 medium containing 20% NaCl, only H. pleomorpha SPP-AMP-1 T exhibited differentiated morphology similar to that of YIM 93972 (Fig. 2a , Supplementary Figs. 5 and 6 ). Collectively, these observations demonstrate that a distinct group of morphologically differentiating (hereafter morphogenetic, for brevity) haloarchaea is widespread in various hypersaline environments.
Genomic features of morphogenetic haloarchaea
To gain further insight into the biology of morphogenetic haloarchaea (here referring specifically to haloarchaea with hyphal differentiation and spore formation), the complete genomes of YIM 93972 and five other morphogenetic strains were sequenced. The genome of YIM 93972 is composed of a major chromosome (2,676,592 bp, 57.2% G + C content, 2803 predicted genes), a minor chromosome (844,905 bp, 54.7% G + C content, 722 predicted genes), and three plasmids (Supplementary Fig. 7 ; Supplementary Data 2a ). Altogether, the genome of YIM 93972 encompasses 3744 predicted protein-coding genes, 47 tRNAs, and two rRNA operons (the two 16S rRNA genes share 99.8% identity) located on the major and minor chromosomes. Although all six genomes of the morphogenetic haloarchaeal strains consist of two chromosomes of different sizes, they otherwise display many distinct genomic features (Supplementary Data 2 , Supplementary Data 2b–f ). In particular, the genome sizes of YIM A00010 and YIM A00011 (5.97 Mb and 5.82 Mb, respectively) are substantially larger than those of the other four strains (3.23 Mb on average). Among the six strains, only YIM A00013 had two rRNA operons like YIM 93972, but its genome carried fewer plasmids and encompassed fewer genes than that of YIM 93972.
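Per-replicon G + C values like those above are direct base counts over the assembled sequence. A minimal sketch (toy sequence; ambiguous bases are simply ignored in this illustrative version):

```python
def gc_percent(seq):
    """G+C content of a DNA sequence in percent, counting only
    unambiguous A/C/G/T positions (a simplification for illustration)."""
    seq = seq.upper()
    counts = {base: seq.count(base) for base in "ACGT"}
    total = sum(counts.values())
    return 100.0 * (counts["G"] + counts["C"]) / total

# toy fragment: 4 of 6 counted bases are G or C
print(round(gc_percent("ATGCGC"), 1))  # 66.7
```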
We then constructed a set of 14,870 clusters of haloarchaeal orthologous genes (haloCOGs; Supplementary_data_file_ 1 ). Phylogenomic analysis of 268 low-paralogy haloCOGs that are universally conserved in 130 genomes of the class Halobacteria (Supplementary Data 3 ) showed that YIM 93972 and its morphogenetic relatives, including H. pleomorpha , formed a clade with Halomarina oriensis within the family Halobacteriaceae (Fig. 2b , Supplementary_data_file_ 2 ). The morphogenetic haloarchaea shared a core of 2053 strictly conserved orthologous genes but lacked 336 genes that are common in other Halobacteria (Supplementary Data 3 , Supplementary Data 4a ). For 3208 of the 3744 predicted genes of YIM 93972, an ortholog was identified in at least one non-morphogenetic haloarchaeon. After mapping the patterns of gene presence-absence in haloCOGs onto the species tree, we obtained a maximum likelihood reconstruction of the history of gene gains and losses in the class Halobacteria . The ancestor of the morphogenetic haloarchaea was estimated to contain 3903 haloCOGs, of which 1420 were represented in the branch that separates the morphogenetic clade from the common ancestor with the sister clade, whereas 339 new genes were gained (Supplementary Data 4b ).
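Once family sets have been inferred at both ends of a branch, the gain/loss bookkeeping itself reduces to set differences. The sketch below illustrates only that final accounting step with hypothetical family IDs; it is not the maximum likelihood reconstruction used in the study, which infers the ancestral sets from presence-absence patterns and the species tree:

```python
def gains_and_losses(ancestor, descendant):
    """Gene families gained and lost along a branch, given the inferred
    family sets at its two ends (toy stand-in for ML reconstruction)."""
    gained = descendant - ancestor  # present now, absent in ancestor
    lost = ancestor - descendant    # present in ancestor, absent now
    return gained, lost

# hypothetical haloCOG IDs
ancestor_set = {"halo.0001", "halo.0002", "halo.0003", "halo.0004"}
clade_set = {"halo.0001", "halo.0002", "halo.0005"}

gained, lost = gains_and_losses(ancestor_set, clade_set)
print(sorted(gained), sorted(lost))  # ['halo.0005'] ['halo.0003', 'halo.0004']
```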
Several gene losses appeared to be relevant for the morphogenesis of these new haloarchaea, including hyphal formation and the lack of cell motility (Supplementary Data 4c ). These include the loss of CetZ (FtsZ3), which is involved in controlling the rod-like cell shape in non-morphogenetic haloarchaea 19 , as well as of the archaellum and another halobacterial type IV pili system, typified by the PilB3 locus of Haloferax volcanii and represented in Halobacteria and Methanomicrobia 20 . Rod-shaped cells were not observed in morphogenetic haloarchaea, suggesting a functional link between the loss of cetZ and complex hyphal differentiation. Conversely, in morphogenetic haloarchaea, the halo.02163 family of MreB-like proteins is expanded, whereas the halo.02581 family of MreB-like proteins that is common in other haloarchaea is lost. Given that MreB is the key cell-shape determinant in bacteria 21 , these findings further emphasize specific changes in cytoskeleton organization that are likely to be relevant for the pleomorphism of these archaea (Supplementary Fig. 8 ).
Among the gains, 61 genes are strictly conserved in all seven morphogenetic haloarchaea and absent in the other Halobacteria in our genome set. These gained genes include a distinct homolog of Cdc48, an AAA+ ATPase (halo.06695, ORF_0238 ). Proteins of this superfamily play key roles in a variety of cellular processes, including cell-cycle regulation, DNA replication, cell division and the ubiquitin-proteasome system 22 , 23 , and might contribute to morphological differentiation. Another notable case is an M6 family metalloprotease (halo.06530 and halo.06177, ORF_0582 and ORF_1236 ), a distant homolog of InhA1, a major component of the exosporium in Bacillus spores 24 . In most archaea, the S-layer is the exclusive cell envelope component and plays a crucial role in surface recognition and cell shape maintenance 25 , 26 . The morphogenetic haloarchaea share a distinct S-layer-forming glycoprotein (halo.06692, ORF_1400 ), which might be related to their unique morphology. Furthermore, proteomic data showed that YIM 93972 produces two varieties of S-layer proteins, encoded by ORF_1400 and ORF_1704 (Supplementary Fig. 9a–d ). For most of the remaining genes gained and, in some cases, duplicated in the morphogenetic haloarchaea, there was no functional prediction or only a generic one, although notably many of these genes encode predicted transmembrane or secreted proteins (Supplementary Data 4b, d ). Several of these proteins contain aspartate-rich repeats that could coordinate Ca 2+ ions, which are known to activate spore germination in sporulating actinobacteria 27 .
Sporulating bacteria employ a distinct set of small molecules for energy storage and spore protection. We identified genes, present in all seven genomes of morphogenetic haloarchaea, that are involved in trehalose biosynthesis and utilization 28 , dipicolinic acid biosynthesis 29 and poly(R)-hydroxyalkanoic acid biosynthesis 30 (Supplementary Data 3 ). Comparative omics data indicated that genes involved in the biosynthesis of small molecules implicated in sporulation, namely, poly(R)-hydroxyalkanoic acid synthesis ( ORF_2608 ) and 4-hydroxy-tetrahydrodipicolinate ( ORF_1392 ), were upregulated 3.6- and 7.8-fold at the mRNA level and 1.3- and 1.6-fold at the protein level, respectively. This observation suggests that poly(R)-hydroxyalkanoic acid could be a key storage molecule in the spores. Polyhydroxyalkanoates also accumulate in hyphae and spores of some actinobacterial species 30 . However, all these genes are also widespread in non-morphogenetic Halobacteria . Another notable gene module present in all genomes of morphogenetic haloarchaea consists of a MoxR family ATPase and a von Willebrand factor type A (vWA) domain-containing protein, a combination strongly associated with bacterial multicellularity 31 .
Random mutagenesis and comparative multi-omics uncover several morphogenetic genes
Seeking to identify the genetic basis of cellular differentiation in morphogenetic haloarchaea, we performed nitrosoguanidine (NTG) chemical mutagenesis on spores of YIM 93972. After four generations of serial cultivation from the original 1110 mutant colonies, we obtained five transitional (weak aerial hyphae) and three bald (no aerial hyphae) mutants (Supplementary Fig. 10a, b ). Analysis of the genome sequences of these eight mutants identified two mutations in protein-coding sequences and one mutation upstream of a gene in all three bald mutants, and one mutation upstream of a gene in three of the five transitional mutants (Supplementary Fig. 10c , Supplementary Data 5 ). In the three bald mutants, the two intragenic mutations were a non-synonymous G1067A substitution in ORF_0238 (ATPase of the AAA+ class, Cdc48 family) and a synonymous C309T substitution in ORF_0964 (RNA methyltransferase, SPOUT superfamily), whereas the third mutation was a T136G substitution in the non-coding region upstream of ORF_2797 (ParB-like DNA-binding protein). Among the five transitional mutants, three (T2, T4, and T5) shared a C69T substitution in the intergenic region upstream of ORF_1717 (uncharacterized protein).
Given the lack of an experimental genetic system for YIM 93972, we performed whole-transcriptome (Fig. 3a , Supplementary Data 6 – 9 ) and quantitative proteomic analyses (Fig. 4a , Supplementary Fig. 11 , Supplementary Data 10 – 12 ) for two transitional and three bald mutants to further characterize the potential association of the identified mutations with the mutant phenotypes. We identified the G1067A substitution in ORF_0238 in the transcriptomes of all transitional mutants (except for two biological replicates of mutant T2) and all bald mutants, but not in the wild-type transcriptome (Supplementary Fig. 10d , Supplementary Data 7 ). Furthermore, ORF_0238 was downregulated 3.9-fold ( p < 0.001) at the protein level in the bald mutants (Supplementary Data 11 – 12 ). Phylogenetic analysis of the Cdc48 family indicated that ORF_0238 belongs to a distinct branch specific to morphogenetic haloarchaea (Supplementary Fig. 12 ). Taken together, these observations suggest that ORF_0238 might be functionally important for sporulation and morphological differentiation in YIM 93972.
To further characterize the genetic basis of hyphal differentiation, we compared the transcriptomes and quantitative proteomes of wild-type aerial and substrate hyphae with those of the substrate hyphae of the morphogenetic mutants (the only hyphae available, given the absence of aerial hyphae) (Figs. 3 a, b and 4a–c , Supplementary Fig. 11a–e ). Quantitative analysis revealed 107 genes (Fig. 3c, d , Supplementary Data 9 , Supplementary Data 19 ) and 118 proteins (Fig. 4d, e , Supplementary Data 12 , Supplementary Data 20 ) that were significantly upregulated in wild-type aerial hyphae compared with the substrate hyphae of both the wild type and the mutants. The differentially expressed genes and proteins play major roles in translation, transcription, metabolism, and ion transport (Figs. 3 e and 4f , Supplementary Data 13 ). In particular, an ATP-binding cassette (ABC) peptide transporter operon ( ORF_2669-ORF_2673 on chromosome I) was significantly upregulated in wild-type aerial hyphae (Fig. 5a, b , Supplementary Data 14 ). Within this operon, the peptide-binding protein ( ORF_2669 ) was significantly upregulated, 6.91- and 1.6-fold at the mRNA and protein levels, respectively (Fig. 5c , Supplementary Data 21 ).
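Selecting "significantly upregulated" genes of this kind typically combines a fold-change cutoff with a significance cutoff. The sketch below illustrates that filtering step with toy records and illustrative thresholds; it is not the study's actual differential-expression pipeline:

```python
def upregulated(records, min_fold=2.0, max_p=0.05):
    """Return gene IDs whose aerial/substrate expression ratio passes
    both a fold-change and a p-value cutoff (illustrative thresholds)."""
    return [gene for gene, fold, p in records
            if fold >= min_fold and p <= max_p]

# (gene, fold change aerial vs. substrate, adjusted p-value) -- toy values
records = [
    ("ORF_2669", 6.91, 1e-4),  # passes both cutoffs
    ("ORF_0100", 1.20, 0.30),  # essentially unchanged
    ("ORF_0200", 3.00, 0.20),  # large fold change but not significant
]
print(upregulated(records))  # ['ORF_2669']
```

Requiring both criteria is what keeps large but noisy fold changes (like the third toy record) out of the significant set.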
It has been shown that Streptomyces coelicolor employs a distinct ATP-binding cassette (ABC) transporter to import oligopeptides that serve as signals for aerial mycelium formation 32 . To test whether YIM 93972 similarly uses ABC transporters to import peptides into the mycelia, we tested the resistance of transitional and bald mutants to bialaphos, an antibiotic that enters cells via oligopeptide permeases 33 . We found that only the wild type was sensitive to bialaphos (Fig. 5d ), whereas all mutants were resistant to this antibiotic at concentrations below 1.0 μg/mL (Supplementary Fig. 13 ). Despite the low protein sequence similarity with the S. coelicolor oligopeptide permease (Fig. 6a ), we hypothesized that the ORF_2669-ORF_2673 operon of YIM 93972 is involved in aerial mycelium and spore formation. Deletion of the oligopeptide-transport operon ( bldKA-KE ) causes a bald phenotype (substrate mycelium only) in S. coelicolor 34 . We introduced the entire ORF_2669-ORF_2673 operon of YIM 93972 into the S. coelicolor M145 ΔbldKA-KE /pIB139 mutant (Supplementary Data 15 , 16 ) and found that aerial mycelium formation and sporulation were restored (Fig. 6b ). Thus, the ORF_2669-ORF_2673 operon of YIM 93972 might be involved in aerial mycelium formation by importing signaling oligopeptides, and could be functionally equivalent to the bldKA-KE operon of S. coelicolor .
In Actinobacteria, Cyanobacteria and the sporulating Firmicutes, multiple transcriptional regulators (TRs) are involved in complex cell differentiation 35 – 38 . In our transcriptomic and proteomic data, 47 TRs were significantly up- or downregulated in aerial hyphae compared with substrate hyphae (Supplementary Data 17 ). The significantly upregulated TRs included regulators of phosphate uptake ( ORF_0890 , ORF_0891 , and ORF_1046 ) as well as ORF_1998 , encoding the cell fate regulator YlbF (YheA/YmcA/DUF963 family). The ortholog of ORF_1998 is involved in competence development and sporulation in Bacillus subtilis 39 . Another upregulated gene, ORF_1932 , encodes the global TR BolA, a key player in biofilm development 39 – 41 . Furthermore, an AbrB family TR encoded in the same operon as genes involved in the biosynthesis of poly(R)-hydroxyalkanoic acid, the energy storage molecule in spores, was also upregulated. These findings are compatible with major roles of distinct TRs in the cellular differentiation of morphogenetic archaea.
In summary, we report the biological, genetic and biochemical characteristics of a haloarchaeon that displays complex cellular differentiation. Although the mechanism of this differentiation remains to be elucidated, our findings implicate several genes, in particular, a distinct Cdc48-like ATPase that was mutated in all non-differentiating mutants. The observation that the morphogenetic haloarchaea are closely related to each other and, in the phylogenetic tree of Halobacteria , form a distinct clade within the family Halobacteriaceae suggests a relatively recent origin of this complex phenotype. It remains to be shown whether complex cellular differentiation, which evolved independently in several groups of bacteria, also independently emerged in other archaea.
Taxonomic description
As mentioned above, phylogenetic analysis of the 16S rRNA gene and of concatenated conserved genes indicated that strain YIM 93972 forms a clade with its closest relative, Halocatena pleomorpha , within the family Halobacteriaceae . The average nucleotide identity based on BLAST (ANIb) between YIM 93972 and H. pleomorpha SPP-AMP-1 T was 70.59%, far below the 75% cut-off for genus demarcation in the class Halobacteria 42 . Further, the low AAI (68.4%) between YIM 93972 and H. pleomorpha SPP-AMP-1 T confirms the novelty of YIM 93972 at the genus level 42 . The substrate mycelium of YIM 93972 formed sporangia, the aerial mycelium formed spore chains at maturity, and the spores were cylindrical with wrinkled surfaces. The respiratory quinones of strain YIM 93972 are MK-8 and MK-8(H 2 ) (Supplementary Fig. 14 ). In containing MK-8(H 2 ), the respiratory quinones of YIM 93972 differ from those of H. pleomorpha SPP-AMP-1 T , which contains only the menaquinone MK-8. The major polar lipids of the strain were chromatographically identified as phosphatidylglycerol, phosphatidylglycerolphosphate methyl ester and five unidentified glycolipids, whereas H. pleomorpha SPP-AMP-1 T has phosphatidylglycerol, phosphatidylglycerolphosphate methyl ester, glycosyl mannosyl glucosyl diether and sulphated glycosyl mannosyl glucosyl diether. Besides the phylogenetic and morphological differences, the isolate is differentiated from the closely related genera by its chemotaxonomic markers (Supplementary Data 1 ). Based on the above results, YIM 93972 T represents a new species in a new genus within the family Halobacteriaceae , for which the name Actinoarchaeum halophilum gen. nov., sp. nov. is herewith proposed.
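The genus-level conclusion follows directly from comparing the measured ANI to the published cut-off. A minimal sketch of that threshold logic (the function name is ours; the <75% cut-off is the one cited in the text for the class Halobacteria ):

```python
def distinct_genus(ani_percent, cutoff=75.0):
    """True if an ANI value falls below the genus-demarcation cut-off
    (<75% ANI, as cited for the class Halobacteria)."""
    return ani_percent < cutoff

# YIM 93972 vs. H. pleomorpha SPP-AMP-1T: ANIb = 70.59%
print(distinct_genus(70.59))  # True: well below the genus cut-off
```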
Description of Actinoarchaeum gen. nov
Actinoarchaeum (Ac.tino.ar.chae’um. Gr. n. aktis - inos , a ray; N.L. neut. n. archaeum (from Gr. adj. archaios - ê - on , ancient), archaeon; N.L. neut. n. Actinoarchaeum ray archaeon, referring to the radial arrangement of filaments).
Aerobic, extremely halophilic archaeon forming Streptomyces -like colonies; brown substrate mycelia form terminal sporangia, and white aerial mycelia bear white spores on modified ISP 4 medium. The major polar lipids are phosphatidylglycerol, phosphatidylglycerol phosphate methyl ester and five unidentified glycolipids. The predominant menaquinones are MK-8 and MK-8(H 2 ). The G + C content of the genomic DNA is about 56.3 mol%. The type species is Actinoarchaeum halophilum . Recommended three-letter abbreviation: Aah .
Description of Actinoarchaeum halophilum sp. nov
Actinoarchaeum halophilum (ha.lo’phi.lum. Gr. n. hals halos, salt; N.L. adj. philus - a - um , from Gr. adj. philos - ê -on, friend, loving; N.L. neut. adj. halophilum , salt-loving).
Morphological, chemotaxonomic, and general characteristics are as given above for the genus. Cells require 2.1–6.0 M NaCl, pH 6.0–9.0, 25–50 °C and Mg 2+ (0.01–0.7 M) for growth. Optimal growth occurs at 3.8–4.2 M NaCl, pH 7.0–7.5 and 40–45 °C. Weakly oxidase-positive and catalase-negative. Positive for nitrate reduction and for hydrolysis of Tweens (20, 40, 60, 80), casein and gelatin; negative for production of indole and H 2 S. Acetate, citrate, dextrin, fructose, fumarate, glucose, glycerol, malate, mannitol, mannose, pyruvate, rhamnose, succinate, sucrose and trehalose are utilized as sole carbon sources, but galactose, lactate, lactose, xylitol and xylose are not. Acid is not produced from any of the sole carbon sources listed above. The strain contains MK-8 and MK-8(H 2 ). Polar lipids include PG, PGP-Me and five unidentified glycolipids. The genomic DNA G+C content is 56.3%. The type strain is YIM 93972 T (=DSM 46868 T = CGMCC 1.17467 T ), isolated from a soil sample from a salt lake in the Xinjiang Uygur Autonomous Region of China.

Abstract

Several groups of bacteria have complex life cycles involving cellular differentiation and multicellular structures. For example, actinobacteria of the genus Streptomyces form multicellular vegetative hyphae, aerial hyphae, and spores. However, similar life cycles have not yet been described for archaea. Here, we show that several haloarchaea of the family Halobacteriaceae display a life cycle resembling that of Streptomyces bacteria. Strain YIM 93972 (isolated from a salt marsh) undergoes cellular differentiation into mycelia and spores. Other closely related strains are also able to form mycelia, and comparative genomic analyses point to gene signatures (apparent gain or loss of certain genes) that are shared by members of this clade within the Halobacteriaceae . Genomic, transcriptomic and proteomic analyses of non-differentiating mutants suggest that a Cdc48-family ATPase might be involved in cellular differentiation in strain YIM 93972.
Additionally, a gene encoding a putative oligopeptide transporter from YIM 93972 can restore the ability to form hyphae in a Streptomyces coelicolor mutant that carries a deletion in a homologous gene cluster ( bldKA-bldKE ), suggesting functional equivalence. We propose strain YIM 93972 as the representative of a new species in a new genus within the family Halobacteriaceae , under the name Actinoarchaeum halophilum gen. nov., sp. nov. Our demonstration of a complex life cycle in a group of haloarchaea adds a new dimension to our understanding of the biological diversity and environmental adaptation of archaea.
Bacteria of the genus Streptomyces have complex life cycles involving cellular differentiation and multicellular structures that have never been observed in archaea. Here, the authors show that several halophilic archaea display a life cycle resembling that of Streptomyces bacteria, undergoing cellular differentiation into mycelia and spores.
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-023-37389-w.
Acknowledgements
We are indebted to Dr. Wen-Jun Li for support in the early stage of this project. The authors thank Dr. Xiu-Zhu Dong, Yong-Jun Liu, Lei Song and Man Cai for strains, reagents, and discussion. We thank Dr. Lingyan Li for discussion of S-layer protein extraction. We thank Professor Aharon Oren for the etymology of the new generic name and epithet, and Professors Leonard Krall, Felix Cheung and Xu-Na Wu for critical reading and editing. We also thank Shu-Jia Wu, Jin-Shuai Sun, Wen-Hui Wu, Jie Ma and Xue Wang for experiments and data processing. P.X., G.P.Z., S.K.T., X.Y.Z., Y.Z., B.B.L. and F.C.H. are supported by the Chinese National Basic Research Programs (2022YFA1304600 and 2020YFE0202200), the Innovation Foundation of Medicine (2022F12010, 20SWAQX34 and AWS17J008), the National Natural Science Foundation of China (32141003, 31901037, 31870824, 91839302, 32071431, 31760003, 32060003, 32070668, 92151001 and 31800001), the Beijing-Tianjin-Hebei Basic Research Cooperation Project (J200001), the CAMS Innovation Fund for Medical Sciences (2019-I2M-5-017, 2019-12M-5-063 and 2022-I2M-C&T-B-082), the Foundation of the State Key Lab of Proteomics (SKLP-C202002, 2021-NCPSB-001, SKLP-K201704 and SKLP-K201901) and the Major Science and Technology Projects of Yunnan Province (202002AA100007). K.S.M., Y.I.W. and E.V.K. are supported by intramural funds of the US Department of Health and Human Services (National Institutes of Health, National Library of Medicine).
Author contributions
P.X., S.K.T., E.V.K. and G.P.Z. designed experiments. K.L. and Y.W. collected the environmental samples. S.K.T. and B.B.L. performed the strain isolation, polyphasic taxonomic analysis, culture of YIM 93972 and the six other morphologically differentiating haloarchaeal strains, and SSU rRNA verification with the help of Y.R.Z., E.Y.L., Y.Z.F., M.X.X., Z.Q.L., X.Z., R.L., M.Y., L.L.Y., T.W.G., H.L.C., Z.K.Z. and T.S.T. M.X.X. and Z.Q.L. performed the heat stress experiments under the direction of T.S.T., P.X., Y.Z. and S.K.T. X.M.C. performed the mutagenesis experiments on YIM 93972. H.J.Z. performed the genomic sequencing of the wild-type and mutant strains. X.Y.Z., K.S.M. and Y.I.W. performed the genome annotation and phylogenomic analysis. Y.Z. performed all of the proteomic experiments and data analysis with the help of S.H.J., T.Z., P.R.C., J.H.S., C.C., L.C. and H.Y.G. X.Y.Z. and Y.Z. performed the transcriptomic analysis and arranged all figures and tables. P.X., Z.P.Z., Y.Z. and K.S.M. performed the transcriptional regulator analysis. Z.P.Z. and S.H.J. helped with data analysis and uploading. Y.H.L. and G.S.Z. constructed all the recombinant plasmids and gene deletion strains for the gene complementation experiments. F.C.H. helped structure the article sections on the six cellularly differentiating halophilic archaea. P.X., Y.Z., G.S.Z., Y.H.L. and S.K.T. designed and performed the spore-genetic gene cluster verification with the help of G.P.Z., E.V.K. and K.S.M. X.Y.Z., Y.Z., K.S.M., P.X., E.V.K. and G.P.Z. wrote the manuscript with the help of all authors.
Peer review
Peer review information
Nature Communications thanks William Whitman and the other, anonymous, reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Data availability
The genome and transcriptome data generated in this study have been deposited in GenBank under accession codes CP071306-CP071310 and GSE168067, respectively. The raw reads of amplicon sequencing are deposited in GenBank under accession number PRJNA703326. All proteomic MS data sets can be obtained from the iProX database (ref. 74) with identifier IPX0002724000, or from ProteomeXchange with identifier PXD023481. Raw reads of the genomes of strains YIM_A00010, YIM_A00011, YIM_A00012, YIM_A00013, and YIM_A00014 were deposited in GenBank under accession codes JAKCFJ000000000, JAKCFK000000000, JAKCFL000000000, JAKCFM000000000, and JAKCFN000000000, respectively. The complete Halobacterial COG data archive (supplementary_data_file_1.tgz) and the phylogenetic trees and alignments archive (supplementary_data_file_2.tgz) can be downloaded from https://ftp.ncbi.nih.gov/pub/wolf/_suppl/halo22/ .
Competing interests
The authors declare no competing interests.
PMC10067837. Nat Commun. 2023 Apr 1; 14:1827. License: CC BY.
PMC10091483 (PMID: 37039501)
Background
Description of the condition
The ductus arteriosus is a blood vessel that connects the main pulmonary artery to the proximal descending aorta. It plays an important role in maintaining foetal circulation by allowing a significant proportion of right ventricular output to bypass the pulmonary circulation ( Gournay 2011 ). Following birth, with establishment of respiration and separation of low‐resistance placenta, closure of the ductus arteriosus begins. This closure is triggered by physiological mechanisms, such as increased oxygen tension and decreased circulating prostaglandin (PGE2) and prostacyclin (PGI2 ( Hundscheid 2019 )). Functional closure of the ductus arteriosus occurs over the next 24 to 72 hours in term infants ( Benitz 2016 ). In preterm infants, closure is often delayed, leading to the ductus arteriosus remaining patent beyond the first few days of life. In healthy preterm neonates, born at > 30 weeks' gestation, the patent ductus arteriosus (PDA) closes by day four in 90%, and by day seven in 98% of infants ( Clyman 2012 ). In extremely preterm infants, born at < 24 weeks' gestation, spontaneous PDA closure rates are only about 8% by day four and 13% by day seven ( Clyman 2012 ).
Therefore, a PDA often persists beyond the first few days of life in a preterm neonate, but may remain asymptomatic, without inducing any adverse haemodynamic consequences in the neonate. However, with progressive decline in pulmonary vascular resistance, blood flow from the aorta into the pulmonary arteries is increased through the PDA. Consequently, the proportion of aortic blood flow that is diverted into the pulmonary circulation is correspondingly increased ( Benitz 2016 ). This 'ductal steal' may result in excessive blood flow through the lungs, predisposing the development of pulmonary congestion, pulmonary oedema, worsening respiratory failure, and eventually, chronic lung disease (CLD ( Benitz 2016 )). At the same time, diversion of blood flow away from the systemic circulation may lead to systemic hypoperfusion, resulting in compromised perfusion to the bowel, kidney, and brain. When a PDA is associated with clinical and echocardiographic signs of pulmonary hyperperfusion and systemic hypoperfusion, this is labelled a symptomatic PDA, or a haemodynamically significant PDA. A persistently symptomatic PDA may be associated with numerous adverse outcomes, including higher rates of death ( Dice 2007 ), bronchopulmonary dysplasia (BPD ( Brown 1979 )), necrotising enterocolitis (NEC ( Dollberg 2005 )), impaired renal function ( Benitz 2016 ), intraventricular haemorrhage (IVH ( Ballabh 2010 )), periventricular leukomalacia (PVL ( Chung 2005 )), and cerebral palsy ( Drougia 2007 ). However, the causal link between these associations has not been demonstrated ( Benitz 2010 ).
Description of the interventions
A PDA can be closed through medical or surgical interventions. Pharmacotherapeutic agents include non‐steroidal anti‐inflammatory drugs (NSAIDs), such as ibuprofen or indomethacin, and acetaminophen, which is a derivative of acetanilide with anti‐inflammatory properties. Surgical interventions include surgical ligation and transcatheter occlusion. In this overview, we will focus on both pharmacotherapeutic and surgical interventions for prevention and treatment of preterm infants with PDA.
NSAIDs act by inhibiting the cyclo‐oxygenase (COX) enzyme, thereby leading to down regulation of PGE2, a potent relaxant of the PDA ( Mahony 1982 ). However, use of indomethacin in preterm infants has been associated with transient or permanent derangement of renal function ( Seyberth 1983 ), NEC ( Coombs 1990 ), gastrointestinal haemorrhage or perforation ( Wolf 1989 ), alteration of platelet function ( Friedman 1976 ), and impairment of cerebral blood flow/cerebral blood flow velocity ( Ohlsson 1993 ). Therefore, variations in indomethacin therapy have been attempted to mitigate the said adverse effects while maximising therapeutic benefit. These include using continuous infusion of indomethacin rather than intermittent bolus doses, which may reduce its adverse effects on cerebral oxygenation ( Hammerman 1995 ), and use of a prolonged course of indomethacin, which may provide increased therapeutic benefit compared to a short course of indomethacin ( Rennie 1991 ).
Ibuprofen exerts its action through inhibition of the COX enzyme, but appears to be associated with lower risk of NEC and transient renal insufficiency, compared to indomethacin ( Ohlsson 2020b ). However, variation in dosage and route of administration of ibuprofen may impact medication effectiveness. It has been demonstrated that to achieve optimal concentrations of ibuprofen for successful PDA closure, irrespective of gestational age, progressively higher doses are required with increasing postnatal age ( Hirt 2008 ). Similar pharmacokinetic studies have shown that peak serum concentrations following oral ibuprofen therapy are significantly higher than previously demonstrated intravenous levels, suggesting a potential for greater responsiveness to oral ibuprofen compared to the intravenous formulation ( Barzilay 2012 ).
Acetaminophen is postulated to exert its action through inhibition of the peroxidase enzyme, thereby leading to down regulation of PGE2 production ( Gillam‐Krakauer 2018 ; Grèen 1989 ). No short‐term adverse effects have been noted with acetaminophen. However, data on the safety and long‐term neurodevelopmental effects of acetaminophen in preterm infants are limited ( Ohlsson 2020 ; Van den Anker 2018 ).
With increasing emphasis on conservative management, surgical PDA ligation is primarily reserved for infants with persistent symptomatic PDA following the failure of medical management. Surgical PDA ligation is associated with reduced mortality, but surviving infants were found to be at increased risk of neurodevelopmental impairment, which could be due to lack of studies addressing survival bias and confounding by indication ( Weisz 2014 ).
How the intervention might work
Prevention and treatment of a PDA via the most effective modality may help to avoid clinical complications associated with persistent PDA, such as mortality, CLD, NEC, and renal failure. Prevention of PDA includes prophylactic medical or surgical closure of the PDA within the first 24 hours after birth, before the development of clinical symptoms ( Benitz 2016 ). Although one of the earliest randomised trials on PDA management used prophylactic surgical PDA ligation, this is no longer a preferred modality in the current clinical context, given its associated risks and the availability of pharmacotherapeutic options ( Cassady 1989 ). Hence, prophylactic management of the PDA essentially involves the use of pharmacotherapeutic agents within the first 24 hours of life, without knowledge of PDA status. Prophylactic use of intravenous indomethacin has been shown to reduce the incidence of symptomatic PDA, surgical PDA ligation, and the incidence of severe intraventricular haemorrhage, but has no effect on mortality, nor on a composite of death or severe neurodevelopmental disability, compared to placebo or no treatment ( Fowlie 2010 ). Prophylactic ibuprofen, compared to placebo or no intervention, has also been shown to reduce the need for rescue treatment with COX inhibitors, and for surgical PDA closure ( Ohlsson 2020a ). However, both prophylactic indomethacin and prophylactic ibuprofen have been shown to be associated with increased risk of oliguria ( Fowlie 2010 ; Ohlsson 2020a ). Prophylactic ibuprofen is further associated with increased risk of gastrointestinal haemorrhage ( Ohlsson 2020a ). Therefore, interest in expectant management of the PDA in preterm infants is growing, and the safety of this approach remains to be established through large randomised controlled trials ( Hundscheid 2018 ).
On the other hand, treatment of PDA entails pharmacotherapeutic or surgical closure of a PDA, the diagnosis of which was based on characteristic clinical symptoms, echocardiographic findings, or both. NSAIDs and acetaminophen have been shown to be more effective in closing a symptomatic PDA compared to placebo ( Mitra 2018 ). Ibuprofen appears to be as effective as indomethacin in closing a symptomatic PDA, while reducing the risk of NEC and transient renal insufficiency ( Ohlsson 2020b ). Moderate‐quality evidence shows that acetaminophen is as effective as ibuprofen, and low‐quality evidence suggests that acetaminophen is as effective as indomethacin in closing a symptomatic PDA ( Ohlsson 2020 ). Data are inconclusive regarding the efficacy and safety of surgery as the initial modality of treatment for a symptomatic PDA in a preterm infant compared to pharmacotherapeutic management ( Malviya 2013 ).
Why it is important to do this overview
Management of the PDA is one of the most controversial topics in neonatal medicine. Prophylactic treatment with indomethacin reduces the need for surgical PDA ligation and severe periventricular and intraventricular haemorrhage, but does not improve the rate of survival without neurosensory impairment at 18 months ( Schmidt 2001 ). There are also concerns about the increased incidence of spontaneous gastrointestinal perforation with prophylactic indomethacin ( Stavel 2017 ). On the other hand, prophylactic use of ibuprofen for PDA in preterm infants has been associated with severe hypoxaemia, pulmonary hypertension, and gastrointestinal haemorrhage ( Gournay 2002 ). Therefore, debate on whether NSAIDs should be routinely used to prevent PDA in preterm infants is ongoing. To date, four Cochrane Neonatal Reviews have examined prophylactic medical or surgical management of PDA in preterm infants ( Fowlie 2010 ; Mosalli 2008 ; Ohlsson 2020 ; Ohlsson 2020a ). There is also debate about whether treatment of an asymptomatic PDA before the development of a significant left‐to‐right shunt improves clinical outcomes. One Cochrane Neonatal Review explored the question of treatment for asymptomatic PDA ( Cooke 2003 ). Similarly, when it comes to treatment for a symptomatic PDA, the availability of multiple management strategies contributes to the dilemma among clinicians. In a recent systematic review and network meta‐analysis, 15 different pharmacotherapeutic options were identified that have been explored in randomised clinical trials for the management of symptomatic PDA ( Mitra 2018 ). The Cochrane Reviews published so far on this topic tackled the problem from a narrow perspective, as all of them compared only two out of several available interventions against each other. Some of these reviews lacked an assessment of the quality of the evidence, using GRADE, and reviews showed variation in the definitions of symptomatic PDA, interventions, and outcomes described. 
Therefore, an overview of available Cochrane Neonatal Reviews was justified, as it helped to summarise the evidence generated so far on the management strategies available for PDA in preterm infants with respect to the most important outcomes, including the quality of the evidence, and also highlighted important gaps in knowledge that may guide future research on PDA management.
Is an overview the right approach?
We followed the Editorial Decision Tree proposed by the Cochrane Comparing Multiple Intervention Methods Group to establish whether our review would better fit an overview format or an intervention review format. For the purposes of this review, we decided to ( Shepherd 2018 ):
review only systematic reviews published in the Cochrane Database of Systematic Reviews, instead of individual trials;
not compare multiple interventions with the intention of drawing inferences about the comparative effectiveness of these interventions, as we cannot draw conclusions on the transitivity assumption from systematic reviews only; and
present a map of evidence from systematic reviews, but with no attempt to rank the interventions.
On the basis of these points, the Editorial Decision Tree recommended that an overview was the appropriate format for this review. | Methods
Criteria for considering reviews for inclusion
Types of studies
In this overview of systematic reviews, we included only published Cochrane Systematic Reviews on the management of patent ductus arteriosus (PDA) in a preterm infant.
Types of participants
For objective 1 (prevention of PDA)
Preterm (gestational age < 37 weeks at birth) or low‐birth‐weight infants (< 2500 g).
For objective 2 (management of PDA)
Preterm (gestational age < 37 weeks at birth) or low‐birth‐weight infants (< 2500 g) with PDA diagnosed clinically, or via echocardiography, or both, in the neonatal period (< 28 days).
We defined an asymptomatic PDA clinically by the presence of a precordial murmur or echocardiographically (presence of left‐to‐right PDA shunt), without clinical signs of a moderate‐ to high‐volume left‐to‐right shunt (hyperdynamic precordial impulse, tachycardia, bounding pulses, widened pulse pressure, worsening respiratory status, hypotension, or cardiac failure).
We defined a symptomatic PDA clinically by the presence of a precordial murmur, along with one or more of the following signs: hyperdynamic precordial impulse, tachycardia, bounding pulses, widened pulse pressure, worsening respiratory status, hypotension, or cardiac failure. We defined a symptomatic PDA echocardiographically by a moderate to large transductal diameter, with or without evidence of pulmonary over‐circulation, with or without evidence of systemic hypoperfusion. We also defined a symptomatic PDA as a combination of left‐to‐right PDA shunt on echocardiography, along with clinical signs of a high‐volume left‐to‐right shunt (hyperdynamic precordial impulse, tachycardia, bounding pulses, widened pulse pressure, worsening respiratory status, hypotension, or cardiac failure).
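The clinical definitions above amount to a simple decision rule. The following is a minimal, hypothetical Python sketch of that rule; the sign names, function name, and return labels are illustrative choices, not taken from the review:

```python
# Clinical signs of a moderate- to high-volume left-to-right shunt, as
# listed in the definitions above (names are illustrative).
CLINICAL_SIGNS = {
    "hyperdynamic_precordial_impulse", "tachycardia", "bounding_pulses",
    "widened_pulse_pressure", "worsening_respiratory_status",
    "hypotension", "cardiac_failure",
}

def classify_pda(murmur: bool, left_to_right_shunt: bool, signs: set) -> str:
    """Classify a PDA per the definitions above (clinical criteria only)."""
    high_volume = bool(signs & CLINICAL_SIGNS)
    if (murmur or left_to_right_shunt) and high_volume:
        return "symptomatic"
    if murmur or left_to_right_shunt:
        return "asymptomatic"
    return "no PDA detected"

print(classify_pda(True, False, {"tachycardia"}))  # symptomatic
print(classify_pda(True, False, set()))            # asymptomatic
```

Note that the review also allows a purely echocardiographic route to the "symptomatic" label (moderate to large transductal diameter), which this clinical-criteria sketch does not encode.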
Types of interventions
In this overview, we specifically included reviews of therapies primarily intended to prevent or manage a PDA.
For objective 1 (prevention of PDA)
Interventions included prophylactic (not guided by knowledge of PDA status) pharmacological or surgical treatment of PDA within 24 hours of birth. Pharmacological treatments included indomethacin, ibuprofen, and acetaminophen compared against each other, or placebo, or no treatment. There were no restrictions on dose, route, or duration of treatment. Surgical interventions included surgical or transcatheter PDA closure compared against medical treatment, or placebo, or no treatment.
For objective 2 (management of PDA)
Interventions included pharmacological and surgical treatments for an asymptomatic or a symptomatic PDA. Pharmacological treatments included indomethacin, ibuprofen, and acetaminophen compared against each other, or placebo, or no treatment. There were no restrictions on dose, route, or duration of treatment. Surgical interventions included surgical or transcatheter PDA closure compared against medical treatment, or placebo, or no treatment.
Types of outcome measures
For objective 1 (prevention of PDA)
Primary outcomes
Severe intraventricular haemorrhage (IVH; grade III/IV ( Papile 1978 ))
Death or moderate/severe neurodevelopmental disability (assessed by a standardised and validated assessment tool, a child developmental specialist, or both) at any age reported (outcome data grouped at 12, 18, and 24 months, if available). Individual components of a neurodevelopmental outcome, defined in individual reviews, were reported if data were available.
Secondary outcomes
PDA‐related outcomes
Symptomatic PDA confirmed on echocardiogram
Proportion of infants receiving open‐label medical treatment (cyclo‐oxygenase inhibitor or paracetamol/acetaminophen dosing, or both)
Proportion of infants requiring surgical ligation or transcatheter occlusion
Other outcomes
Chronic lung disease (CLD), defined as oxygen requirement at 36 weeks’ postmenstrual age ( Ehrenkranz 2005 )
Intraventricular haemorrhage (IVH; grade I to IV ( Papile 1978 ))
Pulmonary haemorrhage, defined as blood‐stained respiratory secretions with a significant change in respiratory requirements and chest X‐ray (CXR) changes in the presence of echocardiographic evidence of significant left‐to‐right ductal shunting ( Kluckow 2014 )
Retinopathy of prematurity (ROP), defined according to the international classification of ROP ( ICCROP 2005 )
Duration of hospitalisation, defined as total length of hospitalisation from birth to discharge home or mortality, in days
Moderate/severe neurodevelopmental disability, assessed by a standardised and validated assessment tool, a child developmental specialist, or both, at any age reported (outcome data grouped at 12, 18, and 24 months, if available). Individual components of a neurodevelopmental outcome were reported if data were available.
All‐cause mortality any time before neonatal intensive care unit (NICU) discharge
Safety outcomes
Necrotising enterocolitis (NEC; stage 2 or greater ( Bell 1978 ))
Gastrointestinal perforation, defined by the presence of free air in the peritoneal cavity on an abdominal X‐ray ( Ohlsson 2020a )
Gastrointestinal bleeding within seven days of the first dose of pharmacotherapy
Oliguria, defined as less than 1 mL/kg/hour
Serum/plasma levels of creatinine (μmol/L) after treatment
Increase in serum/plasma levels of creatinine (μmol/L) after treatment
Serum/plasma levels of bilirubin (μmol/L) after treatment
Increase in serum/plasma levels of bilirubin (μmol/L) after treatment
For objective 2 (management of PDA)
Primary outcomes
Failure of PDA closure after completion of allocated treatment, defined as persistence of symptomatic PDA confirmed clinically, or by echocardiography, or both
Death or moderate/severe neurodevelopmental disability, assessed by a standardised and validated assessment tool, a child developmental specialist, or both, at any age reported (outcome data grouped at 12, 18, and 24 months, if available). Individual components of a neurodevelopmental outcome, as defined in individual reviews, will be reported if data are available.
Secondary outcomes
PDA‐related outcomes
Proportion of infants receiving open‐label medical treatment (repeated COX inhibitor or paracetamol/acetaminophen dosing, or both)
Proportion of infants requiring surgical ligation or transcatheter occlusion
Proportion of infants receiving open‐label medical or surgical treatment in the placebo/no treatment group
Other outcomes
CLD, defined as oxygen requirement at 36 weeks’ postmenstrual age ( Ehrenkranz 2005 )
Pulmonary haemorrhage, defined as blood‐stained respiratory secretions with a significant change in respiratory requirements and chest X‐ray (CXR) changes in the presence of echocardiographic evidence of significant left‐to‐right ductal shunting ( Kluckow 2014 )
Severe intraventricular haemorrhage (IVH; grade III/IV; for studies of asymptomatic treatment ( Papile 1978 ))
Retinopathy of prematurity (ROP; according to the international classification of ROP ( ICCROP 2005 ))
Duration of hospitalisation, defined as total length of hospitalisation from birth to discharge home or mortality, in days
Moderate/severe neurodevelopmental disability, assessed by a standardised and validated assessment tool, a child developmental specialist, or both, at any age reported (outcome data grouped at 12, 18, and 24 months, if available). Individual components of a neurodevelopmental outcome will be reported if data are available.
All‐cause mortality any time before NICU discharge
Safety outcomes
Necrotising enterocolitis (NEC; stage 2 or greater ( Bell 1978 ))
Gastrointestinal perforation, defined by the presence of free air in the peritoneal cavity on an abdominal X‐ray ( Ohlsson 2020a )
Gastrointestinal bleeding within seven days of the first dose of pharmacotherapy
Oliguria, defined as less than 1 mL/kg/hour
Serum/plasma levels of creatinine (μmol/L) after treatment
Increase in serum/plasma levels of creatinine (μmol/L) after treatment
Serum/plasma levels of bilirubin (μmol/L) after treatment
Increase in serum/plasma levels of bilirubin (μmol/L) after treatment
Search methods for identification of reviews
We searched the Cochrane Database of Systematic Reviews, using the term ‘patent ductus arteriosus’, on 20 October 2022. We used the search term to search ‘all text’, not limited to ‘title, abstract, or keywords’. We did not apply any language or date restrictions. We did not search any other databases.
Data collection and analysis
We used the standard methods of Cochrane Neonatal.
Selection of reviews
Two overview authors (SM and DW) independently assessed for inclusion, all potential systematic reviews identified by the search. Disagreements were resolved through discussion, or if required, a third member of the overview team was consulted (PS).
Data extraction and management
Two overview authors (SM and DW) independently extracted data from the reviews, using a standardised form developed in Microsoft Excel. Discrepancies were resolved through discussion, or if needed, through consultation with a third overview author (PS). In the event information regarding review outcomes was unclear or missing, individual studies were accessed for further details.
We extracted data on the following.
Review characteristics:
Review title and authors
Date that the review was last assessed as up‐to‐date
Number of included trials and numbers of participants in the trials and their characteristics
Risk of bias of the included trials, as reported by the review authors (see Quality of included studies within reviews, under Assessment of methodological quality of included reviews)
Interventions and comparisons relevant to this overview
All prespecified outcomes relevant to this overview (their definitions, and whether they were primary or secondary outcomes in the included reviews)
Any other characteristics required to assess and report on review quality (see Quality of included reviews, under Assessment of methodological quality of included reviews)

Statistical summaries:
Summary intervention effects, including pooled effects (e.g. risk ratios (RRs), odds ratios (ORs), mean differences (MDs), as reported in the individual reviews), 95% confidence intervals (CIs), and numbers of studies and participants contributing data to each pooled effect, for comparisons and outcomes relevant to this overview, including relevant subgroup analyses
Information required to assess and report on the certainty of evidence for the intervention effects extracted (see Certainty of evidence in included reviews, under Assessment of methodological quality of included reviews)
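As a concrete illustration of the pooled-effect statistics extracted above (this computation is not part of the overview's methods), the risk ratio and its 95% confidence interval for a single two-arm trial can be derived from the 2×2 event counts using the standard log-scale normal approximation; the counts below are hypothetical:

```python
import math

def risk_ratio(events_tx, total_tx, events_ctrl, total_ctrl):
    """Risk ratio and 95% CI from one two-arm trial's event counts.

    Uses the delta-method standard error of log(RR):
    SE = sqrt(1/a - 1/n1 + 1/c - 1/n2).
    """
    rr = (events_tx / total_tx) / (events_ctrl / total_ctrl)
    se = math.sqrt(1 / events_tx - 1 / total_tx
                   + 1 / events_ctrl - 1 / total_ctrl)
    z = 1.96  # approximate 97.5th percentile of the standard normal
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 10/100 events on treatment vs 20/100 on control.
rr, lo, hi = risk_ratio(10, 100, 20, 100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# prints: RR = 0.50 (95% CI 0.25 to 1.01)
```

A CI crossing 1, as here, corresponds to a pooled effect that does not reach conventional statistical significance; meta-analyses combine such per-trial estimates with inverse-variance or Mantel-Haenszel weighting.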
Assessment of methodological quality of included reviews
We assessed the methodological quality of each systematic review using the updated AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews) instrument ( Shea 2017 ). AMSTAR 2 evaluates the methods used in a review against 16 distinct criteria and assesses the degree to which review methods are unbiased. These criteria are as follows.
Did the research questions and inclusion criteria for the review include the components of PICO (Participants, Intervention, Comparison, Outcomes)?
Did the report of the review contain an explicit statement that the review methods were established prior to the conduct of the review, and did the report justify any significant deviations from the protocol?
Did the review authors explain their selection of the study designs for inclusion in the review?
Did the review authors use a comprehensive literature search strategy?
Did the review authors perform study selection in duplicate?
Did the review authors perform data extraction in duplicate?
Did the review authors provide a list of excluded studies and justify the exclusions?
Did the review authors describe the included studies in adequate detail?
Did the review authors use a satisfactory technique for assessing the risk of bias (RoB) in individual studies that were included in the review?
Did the review authors report on the sources of funding for the studies included in the review?
If meta‐analysis was performed, did the review authors use appropriate methods for statistical combination of results?
If meta‐analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta‐analysis or other evidence synthesis?
Did the review authors account for RoB in individual studies when interpreting/discussing the results of the review?
Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review?
If they performed quantitative synthesis, did the review authors carry out an adequate investigation of publication bias (small‐study bias) and discuss its likely impact on the results of the review?
Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review?
Two overview authors (SM and PS) independently assessed the quality of the included reviews using the online AMSTAR 2 tool ( Shea 2017 ). A third overview author (WdB) verified the assessment. We resolved differences through discussion.
Quality of included studies within reviews
We did not reassess the risk of bias of included studies within reviews. Instead, we reported study quality according to the review authors' assessment. When individual studies were included in two or more Cochrane Reviews, we reported any variation in the review authors' assessments of study quality.
Certainty of evidence in included reviews
We used the GRADE approach, as outlined in the GRADE Handbook to assess the certainty of evidence for the following (clinically relevant) outcomes ( Schünemann 2013 ).
Prevention in at‐risk infants
Death or moderate/severe neurodevelopmental disability
Symptomatic PDA confirmed on echocardiogram
Proportion of infants requiring surgical ligation or transcatheter occlusion
All‐cause mortality any time prior to NICU discharge
CLD, defined as oxygen requirement at 36 weeks’ postmenstrual age
Necrotising enterocolitis (NEC; stage 2 or greater)
Severe intraventricular haemorrhage (IVH; grade III/IV)
Moderate/severe neurodevelopmental disability, assessed by a standardised and validated assessment tool, a child developmental specialist, or both, at any age reported (outcome data grouped at 12, 18, and 24 months, if available)
Treatment of asymptomatic infants
Death or moderate/severe neurodevelopmental disability
Failure of PDA closure after completion of allocated treatment
Proportion of infants requiring surgical ligation or transcatheter occlusion
All‐cause mortality any time prior to NICU discharge
CLD, defined as oxygen requirement at 36 weeks’ postmenstrual age
Necrotising enterocolitis (NEC; stage 2 or greater)
Severe intraventricular haemorrhage (IVH; grade III/IV)
Moderate/severe neurodevelopmental disability, assessed by a standardised and validated assessment tool, a child developmental specialist, or both, at any age reported (outcome data grouped at 12, 18, and 24 months, if available)
Treatment of symptomatic infants
Death or moderate/severe neurodevelopmental disability
Failure of PDA closure after completion of allocated treatment
Proportion of infants requiring surgical ligation or transcatheter occlusion
All‐cause mortality any time prior to NICU discharge
CLD, defined as oxygen requirement at 36 weeks’ postmenstrual age
Necrotising enterocolitis (NEC; stage 2 or greater)
Moderate/severe neurodevelopmental disability, assessed by a standardised and validated assessment tool, a child developmental specialist, or both, at any age reported (outcome data grouped at 12, 18, and 24 months, if available)
We reported the certainty of evidence as assessed by the review authors (who were in the best position to assess certainty given their familiarity with the study level data), using summary of findings tables from the reviews if provided.
The GRADE approach results in an assessment of the certainty of a body of evidence as one of four grades.
High certainty: further research is very unlikely to change our confidence in the estimate of effect.
Moderate certainty: further research is likely to have an important impact on our confidence in the estimate of effect, and may change the estimate.
Low certainty: further research is very likely to have an important impact on our confidence in the estimate of effect, and is likely to change the estimate.
Very low certainty: we are very uncertain about the estimate.
Data synthesis
We provided a narrative description of the characteristics of the included Cochrane Reviews. We then summarised the main results of the included reviews by categorising their findings, based on outcomes. We did not attempt to quantitatively synthesise the results using indirect comparison techniques, such as network meta‐analysis.
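The effect estimates summarised throughout this overview are reported as risk ratios (RR) with 95% confidence intervals. As a purely illustrative aside (not part of the review methods, and using hypothetical counts), the following Python sketch shows how an RR and its log-normal 95% CI are derived from 2×2 trial counts:

```python
import math

def risk_ratio(events_tx, total_tx, events_ctrl, total_ctrl):
    """Risk ratio and 95% CI (log-normal approximation) from 2x2 counts."""
    rr = (events_tx / total_tx) / (events_ctrl / total_ctrl)
    # Standard error of log(RR) for a single 2x2 table
    se = math.sqrt(1 / events_tx - 1 / total_tx + 1 / events_ctrl - 1 / total_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical trial: 30/200 events with treatment vs 50/200 with control
rr, lo, hi = risk_ratio(30, 200, 50, 200)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")  # → RR 0.60, 95% CI 0.40 to 0.90
```

A CI that excludes 1.00 (as in this hypothetical example) is conventionally read as evidence of a difference between groups; pooled meta-analytic estimates additionally weight each trial, which this single-table sketch does not show.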
Subgroup analysis
If the information was available, we planned to separately report outcome results for the following subgroups.
Gestational age (less than 28 weeks, 28 weeks or more)
Birthweight (less than 1000 g, 1000 g or more)
Timing of initiation of treatment for asymptomatic PDA (less than 24 hours, 24 hours or longer)
Timing of initiation of treatment for symptomatic PDA (less than 72 hours, 72 hours or longer)
Method used to diagnose a symptomatic PDA (by echocardiographic criteria or only by clinical criteria)
Degree of haemodynamic significance of the PDA (based on echocardiographic criteria)

Results
Our search (October 2022) identified 17 relevant Cochrane Reviews and two Cochrane Review protocols under review ( Figure 1 ). Of these 19, we included 16 reviews in this overview. The review by Mitra 2022 was identified as a Bayesian network meta‐analysis of data obtained from studies, most of which were already included in the reviews of the respective pharmacoprophylactic interventions ( Fowlie 2010 ; Jasani 2022 ; Ohlsson 2020a ). Since the latter three reviews were already included in this overview, and to avoid duplication while summarising the results, we excluded the network meta‐analysis by Mitra 2022 . One of the 16 reviews had no included studies and therefore did not contribute to the results ( Anabrees 2011 ).
Description of included reviews
We included the following reviews in this overview ( Table 1 ).
Anabrees 2011 (no included trials) included extremely low birthweight infants (< 1000 g at birth) who received prophylactic indomethacin in the first 24 hours of life, and compared fluid restriction (to achieve at least 10% weight loss in the first week of life) plus indomethacin prophylaxis (starting within the first 24 hours for three doses) versus indomethacin prophylaxis alone.
Barrington 2002 (3 RCTs, 75 infants) included preterm infants (≤ 36 weeks' gestation at birth) receiving indomethacin for either PDA closure or prophylaxis, or prophylaxis against intraventricular haemorrhage, during the first month of life. Given that all three included trials compared dopamine with indomethacin versus dopamine alone for treatment of symptomatic PDA, we summarised the results from this review under 'Interventions for management of symptomatic PDA'.
Bell 2014 (5 RCTs, 582 infants) included predominantly preterm infants (< 37 weeks' completed gestation), and compared restricted fluid intake versus liberal fluid intake (standard or control therapy).
Bhola 2015 (2 RCTs, 128 infants) included preterm infants (< 37 weeks' completed gestation) receiving phototherapy, and compared chest shielding with photo‐opaque material versus no shielding, or chest shielding versus sham shielding (sham shielding defined as a simulated shield that is not photo‐opaque).
Brion 2001 (3 RCTs, 70 infants) included preterm infants with a symptomatic PDA who were to receive at least one dose of indomethacin, and compared indomethacin alone versus indomethacin preceded by, or immediately followed by, furosemide.
Cooke 2003 (3 RCTs, 97 infants) included preterm infants (< 37 weeks' gestation) with an asymptomatic PDA who received treatment after 24 hours of age, and compared indomethacin administered either enterally or parenterally, versus placebo or no treatment.
Evans 2021 (14 RCTs, 880 infants) included preterm infants (< 37 weeks' gestational age) and low birthweight infants (< 2500 g) treated for symptomatic PDA, enrolled within the first 28 days of life, and compared indomethacin (any dose, any route) versus placebo or no treatment.
Fowlie 2010 (19 RCTs, 2872 infants) included preterm infants (< 37 weeks' gestational age), and compared prophylactic (not guided by knowledge of PDA status) treatment with indomethacin given within 24 hours of birth versus either placebo or no treatment.
Görk 2008 (2 RCTs, 50 infants) included preterm infants (< 37 weeks' estimated gestation) with a symptomatic PDA, diagnosed clinically, or by echocardiographic examination, or both, in the neonatal period (< 28 days), and compared continuous infusion of indomethacin versus indomethacin administered as a bolus dose over no longer than 20 minutes, in any dosing schedule, after 24 hours of life, for closure of a symptomatic PDA.
Herrera 2007 (5 RCTs, 431 infants) included preterm infants (< 37 weeks' gestation) with a PDA diagnosed on clinical or echocardiographic examination, or both, and compared indomethacin treatment by any route given as a long course (four or more doses) versus a short course (defined as three or fewer doses).
Jasani 2022 (27 RCTs, 2278 infants) included preterm infants (< 37 weeks' gestational age) and low birthweight infants (< 2500 g). The interventions included acetaminophen (given via any route for the purpose of closure of a PDA) administered alone or in combination, in any dose, versus placebo or no intervention, or versus another prostaglandin inhibitor. For prophylactic administration of acetaminophen, eligible infants were required to be within 24 hours of birth; echocardiographic confirmation of PDA was not required. For therapeutic administration of acetaminophen, eligible infants were required to have an echocardiographic confirmation of the PDA, regardless of their postnatal age.
This review is an update of the Ohlsson 2020 review, and is currently under editorial review.
Malviya 2013 (1 RCT, 154 infants) included preterm infants (< 37 weeks' gestational age) or low birthweight infants (< 2500 g) with a symptomatic PDA, diagnosed either clinically or by echocardiography in the neonatal period (less than 28 days), and compared surgical PDA ligation versus medical treatment with cyclooxygenase inhibitors, each used as the initial treatment.
Mitra 2020a (14 RCTs, 910 infants) included preterm (< 37 weeks' gestational age) or low birthweight infants (less than 2500 g) with a haemodynamically significant PDA diagnosed clinically or via echocardiography (or both) in the first seven days of life, and compared early treatment (treatment of a PDA by seven days of age) versus expectant management, and very early treatment (treatment of a PDA by 72 hours of age) versus expectant management.
Mosalli 2008 (1 RCT, 84 infants) included infants < 28 weeks' gestation or < 1000 g at birth who were on assisted ventilation, or supplemental oxygen, or both, without clinical signs of a haemodynamically significant PDA, and compared prophylactic surgical ligation of the PDA (i.e. procedure done during the first 72 hours) versus no prophylactic intervention or medical prophylaxis (cyclooxygenase inhibitors) without dose specification.
Ohlsson 2020a (9 RCTs, 1070 infants) included preterm infants (< 37 weeks' gestational age) and low birthweight infants (< 2500 g) in their first 72 hours of life (three days), and compared prophylactic use of ibuprofen for prevention of PDA versus control, consisting of no intervention, placebo, other cyclooxygenase inhibitor drugs (indomethacin, mefenamic acid), or rescue treatment with ibuprofen.
Ohlsson 2020b (39 RCTs, 2843 infants) included preterm infants (< 37 weeks' gestational age) or low birthweight infants (< 2500 g) with a PDA, diagnosed either clinically or by echocardiography in the neonatal period (less than 28 days), and compared ibuprofen (in different routes and dosages) versus indomethacin, other cyclo‐oxygenase inhibitor(s), placebo, or no intervention.
Methodological quality of included reviews
The AMSTAR 2 assessment of the quality of the included reviews is presented in Table 2 .
We assessed two reviews as high quality ( Mitra 2020a ; Mosalli 2008 ), seven reviews as moderate quality ( Anabrees 2011 ; Bhola 2015 ; Evans 2021 ; Jasani 2022 ; Malviya 2013 ; Ohlsson 2020a ; Ohlsson 2020b ), five as low quality ( Barrington 2002 ; Bell 2014 ; Brion 2001 ; Fowlie 2010 ; Görk 2008 ), and two reviews as critically low quality ( Cooke 2003 ; Herrera 2007 ).
Risk of bias in the included trials, as assessed by the respective review authors, is reported in Table 3 . The certainty of the evidence for the primary outcomes of this overview (as available from the respective reviews) is summarised in Table 4 and Table 5 .
Effect of interventions
Interventions (pharmacological or surgical) for prevention of PDA and related complications in preterm infants
The results for the following outcomes are summarised in Table 6 .
Severe intraventricular haemorrhage (IVH; grade III/IV)
Five Cochrane Reviews reported on the outcome of severe IVH. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that compared to control, prophylactic indomethacin reduced severe IVH (risk ratio (RR) 0.66, 95% confidence interval (CI) 0.53 to 0.82; 14 RCTs, 2588 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen possibly reduced severe IVH (RR 0.67, 95% CI 0.45 to 1.00; 7 RCTs, 925 infants; moderate‐certainty evidence).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention for severe IVH (RR 1.09, 95% CI 0.07 to 16.39; 1 RCT, 48 infants).
Prophylactic surgical PDA ligation
Prophylactic surgical PDA ligation versus control (prophylactic cyclooxygenase inhibitor drugs only): the review by Mosalli 2008 showed that there was no evidence of a difference between prophylactic surgical PDA ligation and control (prophylactic cyclooxygenase inhibitor drugs) for severe IVH (RR 0.81, 95% CI 0.52 to 1.28; 1 RCT, 76 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for severe IVH (RR 0.64, 95% CI 0.22 to 1.85; 2 RCTs, 128 infants).
Death or moderate/severe neurodevelopmental disability
Only one Cochrane Review reported on the composite outcome of death or moderate/severe neurodevelopmental disability. It included the following intervention.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for the composite outcome of death or moderate/severe neurodevelopmental disability (RR 1.02, 95% CI 0.90 to 1.15; 3 RCTs, 1491 infants).
PDA confirmed on echocardiogram
Five Cochrane Reviews reported on the outcome of echocardiogram‐confirmed PDA post‐prophylaxis. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that compared to control, prophylactic indomethacin reduced the presence of PDA (RR 0.29, 95% CI 0.22 to 0.38; 7 RCTs, 965 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen reduced the presence of PDA (RR 0.39, 95% CI 0.31 to 0.48; 9 RCTs, 1029 infants; moderate‐certainty evidence).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that compared to placebo or no intervention, prophylactic acetaminophen reduced the presence of PDA (RR 0.27, 95% CI 0.18 to 0.42; 3 RCTs, 240 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for any PDA detected by echocardiography (RR 0.92, 95% CI 0.52 to 1.64; 1 RCT, 54 infants), or detection of a haemodynamically significant PDA (RR 0.23, 95% CI 0.05 to 1.01; 1 RCT, 74 infants).
Restriction of fluid intake
Restricted versus liberal fluid intake: the review by Bell 2014 showed that compared to liberal fluid intake, restriction of predominantly intravenous (IV) fluids reduced the persistence of PDA (RR 0.52, 95% CI 0.37 to 0.73; 4 RCTs, 526 infants).
Proportion of infants receiving open‐label medical treatment
Two Cochrane Reviews reported on the outcome of receipt of open‐label medical treatment for PDA. They included the following interventions.
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen reduced subsequent open‐label therapy for PDA (RR 0.17, 95% CI 0.11 to 0.26; 6 RCTs, 776 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that compared to control, chest shielding during phototherapy reduced the subsequent need for open‐label treatment for PDA (RR 0.12, 95% CI 0.02 to 0.88; 1 RCT, 74 infants).
Proportion of infants requiring surgical ligation or transcatheter occlusion
Three Cochrane Reviews reported on the outcome of invasive PDA closure by surgical ligation or transcatheter occlusion. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that compared to control, prophylactic indomethacin reduced the need for invasive PDA closure (RR 0.51, 95% CI 0.37 to 0.71; 8 RCTs, 1791 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen reduced the need for invasive PDA closure (RR 0.46, 95% CI 0.22 to 0.96; 7 RCTs, 925 infants; moderate‐certainty evidence).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for invasive PDA closure (RR 0.35, 95% CI 0.01 to 8.36; 1 RCT, 74 infants).
Chronic lung disease (CLD)
Five Cochrane Reviews reported on the outcome of CLD (all definitions included). They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for CLD, defined as oxygen requirement at 28 postnatal days (RR 1.08, 95% CI 0.92 to 1.26; 9 RCTs, 1022 infants), or CLD, defined as oxygen requirement at 36 weeks' postmenstrual age (RR 1.06, 95% CI 0.92 to 1.22; 1 RCT, 999 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for CLD, defined as oxygen requirement at 28 postnatal days (RR 0.88, 95% CI 0.32 to 2.42; 1 RCT, 41 infants), CLD, defined as oxygen requirement at 36 weeks' postmenstrual age (RR 1.06, 95% CI 0.89 to 1.26; 5 RCTs, 817 infants), or CLD with an unspecified age at diagnosis (RR 0.94, 95% CI 0.51 to 1.72; 2 RCTs, 99 infants).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention for CLD, defined as oxygen requirement at 28 postnatal days (RR 0.69, 95% CI 0.32 to 1.48; 1 RCT, 48 infants), or CLD, defined as oxygen requirement at 36 weeks' postmenstrual age (RR 0.36, 95% CI 0.02 to 8.45; 1 RCT, 48 infants).
Prophylactic surgical PDA ligation
Prophylactic surgical PDA ligation versus control (prophylactic cyclooxygenase inhibitor drugs only): the review by Mosalli 2008 showed that there was no evidence of a difference between prophylactic surgical PDA ligation and control for CLD, defined as oxygen requirement at 36 weeks' postmenstrual age (RR 1.07, 95% CI 0.68 to 1.69; 1 RCT, 48 infants).
Restriction of fluid intake
Restricted versus liberal fluid intake: the review by Bell 2014 showed that there was no evidence of a difference between restriction of predominantly IV fluids and liberal fluid intake for CLD (no definition specified for CLD; RR 0.85, 95% CI 0.63 to 1.14; 4 RCTs, 526 infants).
Intraventricular haemorrhage (IVH; grades I to IV)
Four Cochrane Reviews reported on the outcome of IVH (grades I to IV). They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that compared to control, prophylactic indomethacin reduced all grades of IVH (RR 0.88, 95% CI 0.80 to 0.98; 14 RCTs, 2532 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for all grades of IVH (RR 0.96, 95% CI 0.78 to 1.17; 6 RCTs, 901 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for all grades of IVH (RR 0.53, 95% CI 0.10 to 2.71; 1 RCT, 74 infants).
Restriction of fluid intake
Restricted versus liberal fluid intake: the review by Bell 2014 showed that there was no evidence of a difference between restriction of predominantly IV fluids and liberal fluid intake with respect to all grades of IVH (RR 0.74, 95% CI 0.48 to 1.14; 3 RCTs, 356 infants).
Pulmonary haemorrhage
One Cochrane Review reported on the outcome of pulmonary haemorrhage. It included the following intervention.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for pulmonary haemorrhage (RR 0.84, 95% CI 0.66 to 1.07; 4 RCTs, 1591 infants).
Retinopathy of prematurity (ROP)
Five Cochrane Reviews reported on the outcome of ROP. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for any stage of ROP (RR 1.02, 95% CI 0.92 to 1.12; 5 RCTs, 1571 infants), or for severe ROP (RR 1.75, 95% CI 0.92 to 3.34; 2 RCTs, 289 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for ROP (RR 1.01, 95% CI 0.73 to 1.38; 5 RCTs, 369 infants).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention for ROP (defined as any ROP that required treatment; RR 3.25, 95% CI 0.14 to 76.01; 1 RCT, 48 infants).
Prophylactic surgical PDA ligation
Prophylactic surgical PDA ligation versus control (prophylactic cyclooxygenase inhibitor drugs only): the review by Mosalli 2008 showed that there was no evidence of a difference between prophylactic surgical PDA ligation and control for any stage of ROP (RR 0.67, 95% CI 0.31 to 1.43; 1 RCT, 43 infants), or for severe ROP (RR 0.32, 95% CI 0.04 to 2.82; 1 RCT, 43 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for any stage of ROP (RR 0.53, 95% CI 0.10 to 2.71; 1 RCT, 74 infants).
Duration of hospitalisation (days)
Two Cochrane Reviews reported on duration of hospitalisation. They included the following interventions.
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for duration of hospitalisation (mean difference (MD) 1.30 days, 95% CI ‐3.07 to 5.67; 6 RCTs, 447 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for duration of hospitalisation (MD ‐8.05 days, 95% CI ‐18.04 to 1.94; 2 RCTs, 128 infants).
Moderate/severe neurodevelopmental disability
One Cochrane Review reported on the outcome of moderate/severe neurodevelopmental disability. It included the following intervention.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for moderate/severe neurodevelopmental disability (RR 0.96, 95% CI 0.79 to 1.17; 3 RCTs, 1286 infants).
All‐cause mortality
Six Cochrane Reviews reported on mortality (all time points included). They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for mortality during the hospital stay (RR 0.82, 95% CI 0.65 to 1.03; 17 RCTs, 1567 infants), or mortality at the latest follow‐up (RR 0.96, 95% CI 0.81 to 1.12; 18 RCTs, 2769 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for neonatal mortality (RR 0.93, 95% CI 0.50 to 1.74; 6 RCTs, 342 infants), mortality during the hospital stay (RR 0.90, 95% CI 0.62 to 1.30; 4 RCTs, 700 infants), or mortality before 36 weeks' postmenstrual age (RR 0.96, 95% CI 0.56 to 1.66; 1 RCT, 131 infants).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention for mortality during the hospital stay (RR 0.59, 95% CI 0.24 to 1.44; 3 RCTs, 240 infants; low‐certainty evidence).
Prophylactic surgical PDA ligation
Prophylactic surgical PDA ligation versus control (prophylactic cyclooxygenase inhibitor drugs only): the review by Mosalli 2008 showed that there was no evidence of a difference between prophylactic surgical PDA ligation and control for neonatal mortality (< 28 days of life; RR 0.88, 95% CI 0.53 to 1.45; 1 RCT, 84 infants), or for mortality at one year (RR 1.06, 95% CI 0.75 to 1.49; 1 RCT, 84 infants).
Chest shielding during phototherapy
Chest shielding during phototherapy versus control: the review by Bhola 2015 showed that there was no evidence of a difference between chest shielding during phototherapy and control for mortality during the hospital stay (RR 1.68, 95% CI 0.75 to 3.78; 2 RCTs, 128 infants), or for neonatal mortality (< 28 days of age; RR 1.06, 95% CI 0.16 to 7.10; 1 RCT, 74 infants).
Restriction of fluid intake
Restricted versus liberal fluid intake: the review by Bell 2014 showed that there was no evidence of a difference between restriction of predominantly IV fluids and liberal fluid intake for mortality before discharge (RR 0.81, 95% CI 0.54 to 1.23; 5 RCTs, 582 infants).
Necrotising enterocolitis (NEC)
Five Cochrane Reviews reported on NEC. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for NEC (RR 1.09, 95% CI 0.82 to 1.46; 12 RCTs, 2401 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for NEC (RR 0.96, 95% CI 0.61 to 1.50; 9 RCTs, 1028 infants; moderate‐certainty evidence).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention for NEC (RR 0.36, 95% CI 0.02 to 8.45; 1 RCT, 48 infants).
Prophylactic surgical PDA ligation
Prophylactic surgical PDA ligation versus control (prophylactic cyclooxygenase inhibitor drugs only): the review by Mosalli 2008 showed that compared to control (prophylactic cyclooxygenase inhibitor drugs), prophylactic surgical PDA ligation reduced NEC (RR 0.25, 95% CI 0.08 to 0.83; 1 RCT, 84 infants).
Restriction of fluid intake
Restricted versus liberal fluid intake: the review by Bell 2014 showed that compared to liberal fluid intake, restriction of predominantly IV fluids reduced NEC (RR 0.43, 95% CI 0.21 to 0.87; 4 RCTs, 526 infants).
Gastrointestinal perforation
Two Cochrane Reviews reported on gastrointestinal perforation. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for gastrointestinal perforation (RR 1.13, 95% CI 0.71 to 1.79; 1 RCT, 1202 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that there was no evidence of a difference between prophylactic ibuprofen and placebo or no intervention for gastrointestinal perforation (RR 4.88, 95% CI 0.87 to 27.36; 2 RCTs, 167 infants).
Gastrointestinal bleeding
One Cochrane Review reported on gastrointestinal bleeding. It included the following intervention.
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen increased gastrointestinal bleeding (RR 2.05, 95% CI 1.19 to 3.51; 5 RCTs, 282 infants; low‐certainty evidence).
Oliguria
Three Cochrane Reviews reported on the outcome of oliguria. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that compared to control, prophylactic indomethacin increased oliguria (RR 1.90, 95% CI 1.45 to 2.47; 8 RCTs, 2115 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen increased oliguria (RR 1.45, 95% CI 1.04 to 2.02; 4 RCTs, 747 infants; high‐certainty evidence).
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention on oliguria (RR 0.78, 95% CI 0.29 to 2.11; 1 RCT, 48 infants).
Serum/plasma levels of creatinine after treatment
One Cochrane Review reported on this outcome. It included the following intervention.
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen increased serum creatinine levels post‐treatment (weighted mean difference (WMD)** 0.09 mg/dL, 95% CI 0.05 to 0.13; 6 RCTs, 800 infants; low‐certainty evidence).
** Please note that Cochrane now uses mean difference.
Increase in serum/plasma levels of creatinine after treatment
Two Cochrane Reviews reported on this outcome. They included the following interventions.
Prophylactic indomethacin
Prophylactic indomethacin versus placebo: the review by Fowlie 2010 showed that there was no evidence of a difference between prophylactic indomethacin and control for an increase in serum/plasma levels of creatinine after treatment (RR 1.09, 95% CI 0.47 to 1.79; 4 RCTs, 618 infants).
Prophylactic ibuprofen
Prophylactic ibuprofen versus placebo: the review by Ohlsson 2020a showed that compared to placebo or no intervention, prophylactic ibuprofen increased serum/plasma levels of creatinine after treatment (RR 3.70, 95% CI 1.05 to 12.98; 2 RCTs, 285 infants).
Serum/plasma levels of bilirubin after treatment
One Cochrane Review reported on this outcome. It included the following intervention.
Prophylactic acetaminophen
Prophylactic acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between prophylactic acetaminophen and placebo or no intervention on serum/plasma levels of bilirubin after treatment (MD 1 μmol/L, 95% CI ‐10.35 to 12.35; 1 RCT, 48 infants).
Interventions (pharmacological or surgical) for management of asymptomatic PDA in preterm infants
Only one Cochrane Review, which compared indomethacin versus placebo for asymptomatic PDA, addressed this objective ( Cooke 2003 ). It included the following outcomes ( Table 7 ).
Symptomatic PDA
Compared to placebo, treatment of asymptomatic PDA with indomethacin reduced the development of symptomatic PDA post‐treatment (RR 0.36, 95% CI 0.19 to 0.68; 3 RCTs, 97 infants).
Proportion of infants requiring invasive PDA closure (surgical ligation or transcatheter occlusion)
There was no evidence of a difference between the treatment of asymptomatic PDA with indomethacin and placebo on the need for invasive PDA closure (RR 0.45, 95% CI 0.17 to 1.21; 2 RCTs, 73 infants).
Chronic lung disease (CLD)
There was no evidence of a difference between treatment of asymptomatic PDA with indomethacin and placebo for CLD (RR 0.91, 95% CI 0.62 to 1.35; 2 RCTs, 45 infants).
Retinopathy of prematurity (ROP)
There was no evidence of a difference between treatment of asymptomatic PDA with indomethacin and placebo for any stage of ROP (RR 0.68, 95% CI 0.26 to 1.78; 2 RCTs, 55 infants).
Duration of hospitalisation (days)
There was no evidence of a difference between treatment of asymptomatic PDA with indomethacin and placebo on the duration of hospitalisation (MD ‐11 days, 95% CI ‐45.21 to 23.21; 1 RCT, 26 infants).
Mortality
There was no evidence of a difference between treatment of asymptomatic PDA with indomethacin and placebo for mortality (RR 1.32, 95% CI 0.45 to 3.86; 2 RCTs, 73 infants).
Necrotising enterocolitis (NEC)
There was no evidence of a difference between treatment of asymptomatic PDA with indomethacin and placebo on NEC (RR 0.41, 95% CI 0.05 to 3.68; 1 RCT, 47 infants).
Interventions (pharmacological or surgical) for management of symptomatic (haemodynamically significant) PDA in preterm infants
Failure of PDA closure after completion of allocated treatment
Eight Cochrane Reviews reported on the outcome of failure of PDA closure. They included the following interventions ( Table 8 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that compared to placebo or no treatment, indomethacin reduced failure of PDA closure post‐treatment (RR 0.30, 95% CI 0.23 to 0.38; 10 RCTs, 654 infants; high‐certainty evidence).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between a prolonged and short course of indomethacin on failure of PDA closure post‐treatment (RR 0.82, 95% CI 0.51 to 1.33; 4 RCTs, 361 infants).
Continuous infusion versus intermittent bolus of indomethacin: the review by Görk 2008 showed that there was no evidence of a difference between continuous infusion and intermittent bolus of indomethacin on failure of PDA closure post‐treatment by day two (RR 1.57, 95% CI 0.54 to 4.60; 2 RCTs, 48 infants), or by day five (RR 2.77, 95% CI 0.33 to 23.14; 1 RCT, 25 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that compared to placebo or no treatment, IV ibuprofen reduced failure of PDA closure post‐treatment (RR 0.62, 95% CI 0.44 to 0.86; 2 RCTs, 206 infants; moderate‐certainty evidence).
Oral ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that compared to placebo or no treatment, oral ibuprofen reduced failure of PDA closure post‐treatment (RR 0.26, 95% CI 0.11 to 0.62; 1 RCT, 64 infants).
Ibuprofen (IV or oral) versus indomethacin (IV or oral): the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin on failure of PDA closure post‐treatment (RR 1.07, 95% CI 0.92 to 1.24; 24 RCTs, 1590 infants; moderate‐certainty evidence).
Oral ibuprofen versus indomethacin (IV or oral): the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin on failure of PDA closure post‐treatment (RR 0.96, 95% CI 0.73 to 1.27; 8 RCTs, 272 infants; low‐certainty evidence).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that compared to IV ibuprofen, oral ibuprofen reduced failure of PDA closure post‐treatment (RR 0.38, 95% CI 0.26 to 0.56; 5 RCTs, 406 infants; moderate‐certainty evidence).
High‐dose ibuprofen versus standard dose ibuprofen: the review by Ohlsson 2020b showed that compared to standard dose ibuprofen, high dose ibuprofen reduced failure of PDA closure post‐treatment (RR 0.37, 95% CI 0.22 to 0.61; 3 RCTs, 190 infants; moderate‐certainty evidence).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen on failure of PDA closure post‐treatment (RR 1.31, 95% CI 0.44 to 3.91; 1 RCT, 49 infants).
Continuous infusion of ibuprofen versus intermittent boluses of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion of ibuprofen and intermittent boluses of ibuprofen on failure of PDA closure post‐treatment (RR 1.18, 95% CI 0.88 to 1.50; 1 RCT, 111 infants).
Rectal versus oral ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between rectal and oral ibuprofen on failure of PDA closure post‐treatment (RR 0.83, 95% CI 0.28 to 2.4; 1 RCT, 72 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen on failure of PDA closure post‐treatment (RR 1.02, 95% CI 0.88 to 1.18; 18 RCTs, 1535 infants; moderate‐certainty evidence).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin on failure of PDA closure post‐treatment (RR 1.02, 95% CI 0.78 to 1.33; 4 RCTs, 380 infants; low‐certainty evidence).
Early acetaminophen versus placebo: the review by Jasani 2022 showed that compared to placebo, early acetaminophen reduced failure of PDA closure post‐treatment (RR 0.35, 95% CI 0.23 to 0.53; 2 RCTs, 127 infants; low‐certainty evidence).
Late acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between late acetaminophen and placebo on failure of PDA closure post‐treatment (RR 0.85, 95% CI 0.72 to 1.01; 1 RCT, 55 infants; low‐certainty evidence).
Acetaminophen and ibuprofen combination therapy versus ibuprofen alone: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen combination therapy and ibuprofen alone on failure of PDA closure post‐treatment (RR 0.77, 95% CI 0.43 to 1.36; 2 RCTs, 111 infants; low‐certainty evidence).
Surgical ligation
Surgical PDA ligation versus medical treatment with indomethacin: the review by Malviya 2013 showed that compared to medical therapy, surgical ligation reduced failure of PDA closure post‐treatment (RR 0.04, 95% CI 0.01 to 0.27; 1 RCT, 154 infants).
Adjunct therapies
Furosemide versus control: the review by Brion 2001 showed that there was no evidence of a difference between the combination of furosemide and indomethacin versus indomethacin alone on failure of PDA closure post‐treatment (RR 1.25, 95% CI 0.62 to 2.52; 3 RCTs, 70 infants).
Dopamine versus control: the review by Barrington 2002 showed that there was no evidence of a difference between the combination of dopamine and indomethacin versus indomethacin alone on failure of PDA closure post‐treatment (RR 1.11, 95% CI 0.56 to 2.19; 3 RCTs, 74 infants).
Death or moderate/severe neurodevelopmental disability
No reviews reported on the combined outcome of death or moderate/severe neurodevelopmental disability.
Proportion of infants receiving open‐label medical treatment
Four Cochrane Reviews reported on the outcome of receipt of open‐label treatment. They included the following interventions ( Table 9 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that compared to placebo or no treatment, indomethacin reduced the need for open‐label treatment (RR 0.35, 95% CI 0.23 to 0.54; 6 RCTs, 211 infants).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between prolonged and short course of indomethacin on the need for open‐label treatment (RR 0.95, 95% CI 0.67 to 1.34; 5 RCTs, 431 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with indomethacin and expectant management on the need for open‐label treatment (RR 0.33, 95% CI 0.01 to 7.91; 1 RCT, 127 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management on the need for open‐label treatment (RR 0.52, 95% CI 0.26 to 1.02; 1 RCT, 92 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment on the need for open‐label treatment (RR 1.20, 95% CI 0.76 to 1.90; 7 RCTs, 241 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management on the need for open‐label treatment (RR 0.66, 95% CI 0.27 to 1.60; 1 RCT, 105 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management on the need for open‐label treatment (RR 1.06, 95% CI 0.07 to 16.26; 1 RCT, 72 infants).
Proportion of infants requiring invasive PDA closure (surgical ligation or transcatheter occlusion)
Five Cochrane Reviews reported on the outcome of invasive PDA closure. They included the following interventions ( Table 10 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment on the need for invasive PDA closure (RR 0.66, 95% CI 0.33 to 1.29; 6 RCTs, 275 infants; moderate‐certainty evidence).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between prolonged and short course of indomethacin on the need for invasive PDA closure (RR 0.86, 95% CI 0.49 to 1.51; 4 RCTs, 310 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with indomethacin and expectant management on the need for invasive PDA closure (RR 0.74, 95% CI 0.17 to 3.17; 1 RCT, 127 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management on the need for invasive PDA closure (RR 0.54, 95% CI 0.07 to 3.93; 3 RCTs, 161 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment on the need for invasive PDA closure (RR 1.89, 95% CI 0.91 to 3.93; 1 RCT, 134 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin on the need for invasive PDA closure (RR 1.06, 95% CI 0.81 to 1.39; 16 RCTs, 1275 infants; moderate‐certainty evidence).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin on the need for invasive PDA closure (RR 0.93, 95% CI 0.50 to 1.74; 4 RCTs, 174 infants; low‐certainty evidence).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen on the need for invasive PDA closure (RR 0.41, 95% CI 0.14 to 1.21; 5 RCTs, 406 infants; moderate‐certainty evidence).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen on the need for invasive PDA closure (RR 1.00, 95% CI 0.15 to 6.71; 1 RCT, 70 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management on the need for invasive PDA closure (RR 1.14, 95% CI 0.66 to 1.96; 3 RCTs, 305 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management on the need for invasive PDA closure (RR 1.00, 95% CI 0.36 to 2.75; 1 RCT, 60 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that compared to intermittent bolus of ibuprofen, continuous infusion of ibuprofen reduced the need for invasive PDA closure (RR 0.28, 95% CI 0.08 to 0.94; 1 RCT, 111 infants).
Rectal versus oral ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between rectal ibuprofen and oral ibuprofen on the need for invasive PDA closure (RR 1.00, 95% CI 0.15 to 6.72; 1 RCT, 72 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen on the need for invasive PDA closure (RR 0.61, 95% CI 0.34 to 1.08; 6 RCTs, 603 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin on the need for invasive PDA closure (RR 1.31, 95% CI 0.72 to 2.38; 2 RCTs, 237 infants).
Late acetaminophen (initiated on day 14 or later) versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between late acetaminophen and placebo on the need for invasive PDA closure (RR 3.11, 95% CI 0.13 to 73.11; 1 RCT, 55 infants).
Proportion of infants receiving open‐label medical or surgical treatment in the placebo or no treatment group
No reviews reported on this outcome.
Chronic lung disease
Six Cochrane Reviews reported on the outcome of CLD (all definitions included). They included the following interventions ( Table 11 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for CLD defined as the need for supplemental oxygen at 36 weeks' postmenstrual age (RR 0.80, 95% CI 0.41 to 1.55; 1 RCT, 92 infants; low‐certainty evidence), or for CLD defined as the need for supplemental oxygen at 28 days of age (RR 1.45, 95% CI 0.60 to 3.51; 1 RCT, 55 infants).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between prolonged and short course of indomethacin for CLD (RR 1.35, 95% CI 0.78 to 2.36; 2 RCTs, 201 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with indomethacin and expectant management for CLD (RR 0.84, 95% CI 0.52 to 1.37; 2 RCTs, 168 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for CLD (RR 1.06, 95% CI 0.61 to 1.83; 4 RCTs, 188 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment for CLD defined as the need for supplemental oxygen at 36 weeks' postmenstrual age (RR 0.99, 95% CI 0.88 to 1.11; 1 RCT, 98 infants), or for CLD defined as the need for supplemental oxygen at 28 days of age (RR 1.09, 95% CI 0.95 to 1.26; 1 RCT, 130 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for CLD defined as the need for supplemental oxygen at 36 weeks' postmenstrual age (RR 1.12, 95% CI 0.77 to 1.61; 3 RCTs, 357 infants), or for CLD defined as the need for supplemental oxygen at 28 days of age (RR 1.20, 95% CI 0.93 to 1.55; 5 RCTs, 292 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for CLD defined as the need for supplemental oxygen at 28 days of age (RD ‐0.07, 95% CI ‐0.42 to 0.29; 1 RCT, 30 infants), or for CLD (no definition specified for CLD; RD ‐0.00, 95% CI ‐0.44 to 0.44; 1 RCT, 18 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for CLD (RR 0.82, 95% CI 0.56 to 1.20; 3 RCTs, 236 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for CLD (RR 1.60, 95% CI 0.85 to 3.02; 1 RCT, 70 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management for CLD (RR 0.97, 95% CI 0.56 to 1.29; 2 RCTs, 171 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that compared to expectant management, very early treatment reduced CLD (RR 0.54, 95% CI 0.35 to 0.83; 2 RCTs, 124 infants).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen for CLD (RR 1.35, 95% CI 0.53 to 3.44; 1 RCT, 49 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for CLD (RR 1.10, 95% CI 0.55 to 2.20; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for CLD defined as need for supplemental oxygen at 36 weeks' postmenstrual age (RR 0.79, 95% CI 0.45 to 1.38; 2 RCTs, 141 infants), for moderate/severe CLD (RR 0.80, 95% CI 0.22 to 2.87; 1 RCT, 160 infants), or for severe CLD (RR 0.62, 95% CI 0.32 to 1.23; 1 RCT, 90 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin for CLD (RR 1.16, 95% CI 0.77 to 1.75; 2 RCTs, 94 infants).
Late acetaminophen (initiated on day 14 or later) versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between late acetaminophen and placebo for CLD (RR 1.04, 95% CI 0.07 to 15.76; 1 RCT, 55 infants).
Acetaminophen and ibuprofen combination therapy versus ibuprofen alone: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen combination therapy and ibuprofen alone for CLD (RR 0.80, 95% CI 0.28 to 2.27; 1 RCT, 24 infants).
Surgical ligation
Surgical PDA ligation versus medical treatment with indomethacin: the review by Malviya 2013 showed that there was no evidence of a difference between surgical PDA ligation and medical therapy for CLD (RR 1.28, 95% CI 0.83 to 1.98; 1 RCT, 154 infants).
Pulmonary haemorrhage
Four Cochrane Reviews reported on the outcome of pulmonary haemorrhage. They included the following interventions ( Table 12 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for pulmonary haemorrhage (RR 0.40, 95% CI 0.14 to 1.16; 1 RCT, 92 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for pulmonary haemorrhage (RR 0.59, 95% CI 0.22 to 1.53; 2 RCTs, 136 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment for pulmonary haemorrhage (RR 0.25, 95% CI 0.03 to 2.18; 1 RCT, 136 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for pulmonary haemorrhage (RR 0.91, 95% CI 0.40 to 2.04; 4 RCTs, 303 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for pulmonary haemorrhage (RD ‐0.22, 95% CI ‐0.51 to 0.07; 1 RCT, 21 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for pulmonary haemorrhage (RR 0.14, 95% CI 0.01 to 2.52; 1 RCT, 70 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment and expectant management for pulmonary haemorrhage (RR 0.59, 95% CI 0.24 to 1.49; 2 RCTs, 124 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for pulmonary haemorrhage (RR 0.87, 95% CI 0.36 to 2.09; 5 RCTs, 442 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin for pulmonary haemorrhage (RR 0.77, 95% CI 0.28 to 2.10; 3 RCTs, 347 infants).
Late acetaminophen (initiated on or after day 14) versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between late acetaminophen and placebo for pulmonary haemorrhage (RR 2.07, 95% CI 0.20 to 21.56; 1 RCT, 55 infants).
Severe intraventricular haemorrhage (grades III/IV)
Five Cochrane Reviews reported on the outcome of severe IVH. They included the following interventions ( Table 13 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for severe IVH (RR 0.33, 95% CI 0.01 to 7.45; 1 RCT, 24 infants).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between a prolonged and short course of indomethacin for severe IVH (RR 0.64, 95% CI 0.36 to 1.12; 4 RCTs, 310 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for severe IVH (RR 1.00, 95% CI 0.07 to 15; 1 RCT, 44 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment for severe IVH (RR 1.00, 95% CI 0.47 to 2.15; 1 RCT, 134 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for severe IVH (RR 1.05, 95% CI 0.68 to 1.63; 10 RCTs, 798 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for severe IVH (RD ‐0.04, 95% CI ‐0.14 to 0.05; 2 RCTs, 124 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for severe IVH (RR 0.50, 95% CI 0.10 to 2.56; 1 RCT, 70 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management for severe IVH (RR 0.83, 95% CI 0.32 to 2.16; 2 RCTs, 171 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management for severe IVH (RR 0.67, 95% CI 0.11 to 3.98; 2 RCTs, 124 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for severe IVH (RR 0.34, 95% CI 0.01 to 8.15; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for severe IVH (RR 0.63, 95% CI 0.28 to 1.43; 6 RCTs, 544 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin for severe IVH (RR 1.10, 95% CI 0.28 to 4.31; 2 RCTs, 112 infants).
Acetaminophen and ibuprofen combination therapy versus ibuprofen alone: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen combination therapy and ibuprofen alone for severe IVH (RR 1.50, 95% CI 0.30 to 7.43; 1 RCT, 24 infants).
Retinopathy of prematurity (ROP)
Six Cochrane Reviews reported on the outcome of ROP. They included the following interventions ( Table 14 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for any stage of ROP (RR 0.32, 95% CI 0.07 to 1.42; 1 RCT, 47 infants), or for severe ROP (≥ stage 3; RR 0.96, 95% CI 0.06 to 14.43; 1 RCT, 47 infants).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between prolonged and short course of indomethacin for any stage of ROP (RR 1.04, 95% CI 0.57 to 1.88; 3 RCTs, 240 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with indomethacin and expectant management for severe ROP (≥ stage 3; RR 0.30, 95% CI 0.02 to 5.34; 1 RCT, 41 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for severe ROP (≥ stage 3; RR 0.16, 95% CI 0.01 to 2.93; 2 RCTs, 136 infants).
Ibuprofen
IV ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment for any stage of ROP (RR 1.19, 95% CI 0.88 to 1.62; 1 RCT, 129 infants), or for severe ROP (≥ stage 3; RR 1.18, 95% CI 0.38 to 3.68; 1 RCT, 129 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for any stage of ROP (RR 0.81, 95% CI 0.60 to 1.10; 7 RCTs, 581 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for any stage of ROP (RD 0.00, 95% CI ‐0.18 to 0.17; 2 RCTs, 71 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for ROP that required laser treatment (RR 0.59, 95% CI 0.26 to 1.34; 2 RCTs, 172 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for any stage of ROP (RR 1.00, 95% CI 0.27 to 3.69; 1 RCT, 70 infants), or for severe ROP (≥ stage 3; RR 2.00, 95% CI 0.19 to 21.06; 1 RCT, 70 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management for severe ROP (≥ stage 3; RR 1.65, 95% CI 0.51 to 5.31; 1 RCT, 105 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management for severe ROP (≥ stage 3; RR 0.80, 95% CI 0.24 to 2.69; 1 RCT, 60 infants).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen for ROP that required laser treatment (RR 2.25, 95% CI 0.50 to 10.05; 1 RCT, 49 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for any stage of ROP (RR 0.68, 95% CI 0.39 to 1.19; 1 RCT, 111 infants), or for severe ROP (≥ stage 3; RR 0.34, 95% CI 0.04 to 3.16; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for severe ROP (≥ stage 3; RR 0.43, 95% CI 0.12 to 1.55; 2 RCTs, 191 infants), or for ROP that required laser treatment (RR 0.94, 95% CI 0.48 to 1.85; 3 RCTs, 353 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin for severe ROP that required treatment (RR 1.32, 95% CI 0.58 to 2.99; 2 RCTs, 96 infants).
Late acetaminophen versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between late acetaminophen and placebo for ROP that required treatment (RR 3.11, 95% CI 0.34 to 28.09; 1 RCT, 55 infants).
Surgical ligation
Surgical PDA ligation versus medical treatment with indomethacin: the review by Malviya 2013 showed that compared to medical therapy, surgical PDA ligation increased severe ROP (≥ stage 3; RR 3.80, 95% CI 1.12 to 12.93; 1 RCT, 154 infants).
Duration of hospitalisation (days)
Five Cochrane Reviews reported on duration of hospitalisation. They included the following interventions ( Table 15 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment on the duration of hospitalisation (MD ‐14.30 days, 95% CI ‐51.36 to 22.76; 1 RCT, 44 infants).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between prolonged and short course of indomethacin on the duration of hospitalisation (MD 19.60 days, 95% CI ‐2.99 to 42.19; 1 RCT, 61 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management on the duration of hospitalisation (MD ‐1.00 day, 95% CI ‐12.83 to 10.83; 1 RCT, 44 infants).
Ibuprofen
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin on the duration of hospitalisation (MD ‐0.69 days, 95% CI ‐4.54 to 3.16; 4 RCTs, 368 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin on the duration of hospitalisation (MD 4.55 days, 95% CI ‐3.61 to 12.71; 1 RCT, 83 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen on the duration of hospitalisation (MD 21.00 days, 95% CI ‐1.44 to 43.44; 1 RCT, 70 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that compared to expectant management, very early treatment reduced the duration of hospitalisation (MD ‐6.27 days, 95% CI ‐10.39 to ‐2.14; 2 RCTs, 124 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that compared to ibuprofen, acetaminophen increased the duration of hospitalisation (MD 2.79 days, 95% CI 0.34 to 5.24; 4 RCTs, 361 infants).
Moderate/severe neurodevelopmental disability
Three Cochrane Reviews reported on moderate/severe neurodevelopmental disability. They included the following interventions ( Table 16 ).
Indomethacin
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for moderate/severe cognitive delay (RR 0.27, 95% CI 0.03 to 2.31; 1 RCT, 79 infants; low‐certainty evidence); moderate/severe motor delay (RR 0.54, 95% CI 0.05 to 5.71; 1 RCT, 79 infants; low‐certainty evidence); or moderate/severe language delay (RR 0.54, 95% CI 0.10 to 2.78; 1 RCT, 79 infants; low‐certainty evidence), when assessed at 18 to 24 months.
Ibuprofen
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for moderate/severe cerebral palsy at 18 to 24 months (RR 1.35, 95% CI 0.24 to 7.48; 1 RCT, 57 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen on the Mental Developmental Index (MDI < 70; RR 1.03, 95% CI 0.41 to 2.59; 1 RCT, 61 infants); on the Psychomotor Developmental Index (PDI < 70; RR 1.03, 95% CI 0.33 to 3.21; 1 RCT, 61 infants); for moderate to severe cerebral palsy (RR 2.07, 95% CI 0.41 to 10.46; 1 RCT, 61 infants); for deafness (RR 0.34, 95% CI 0.01 to 8.13; 1 RCT, 61 infants); or for blindness (RR 0.34, 95% CI 0.01 to 8.13; 1 RCT, 61 infants).
All‐cause mortality
Seven Cochrane Reviews reported on the outcome of mortality. They included the following interventions ( Table 17 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for all‐cause mortality before hospital discharge (RR 0.78, 95% CI 0.46 to 1.33; 8 RCTs, 314 infants; moderate‐certainty evidence).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that there was no evidence of a difference between prolonged and short course of indomethacin for mortality (RR 1.36, 95% CI 0.86 to 2.15; 5 RCTs, 431 infants).
Continuous infusion versus intermittent bolus of indomethacin: the review by Görk 2008 showed that there was no evidence of a difference between continuous infusion and intermittent bolus of indomethacin on death during the first 28 days of life (RR 3.95, 95% CI 0.20 to 76.17; 1 RCT, 32 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with indomethacin and expectant management for all‐cause mortality during the hospital stay (RR 0.95, 95% CI 0.45 to 1.99; 3 RCTs, 195 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for all‐cause mortality during the hospital stay (RR 0.92, 95% CI 0.47 to 1.80; 3 RCTs, 188 infants).
Ibuprofen
Intravenous ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment for mortality (RR 0.80, 95% CI 0.34 to 1.90; 1 RCT, 136 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for all‐cause mortality (RR 0.79, 95% CI 0.54 to 1.17; 10 RCTs, 697 infants); or for neonatal mortality during the first 28 or 30 days of life (RR 1.12, 95% CI 0.59 to 2.11; 4 RCTs, 333 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for all‐cause mortality (RD ‐0.10, 95% CI ‐0.20 to ‐0.0; 4 RCTs, 165 infants); and for neonatal mortality during the first 28 or 30 days of life (RD ‐0.03, 95% CI ‐0.12 to 0.18; 2 RCTs, 66 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for neonatal mortality during the first 28 or 30 days of life (RR 1.13, 95% CI 0.5 to 2.55; 1 RCT, 64 infants); or for mortality during the hospital stay (RR 0.83, 95% CI 0.38 to 1.82; 2 RCTs, 188 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for mortality during the hospital stay (RR 1.02, 95% CI 0.58 to 1.79; 2 RCTs, 155 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management for mortality during the hospital stay (RR 0.65, 95% CI 0.28 to 1.50; 3 RCTs, 305 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management for mortality during the hospital stay (RR 1.46, 95% CI 0.58 to 3.67; 2 RCTs, 124 infants).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen for mortality during the hospital stay (RR 0.56, 95% CI 0.14 to 2.25; 1 RCT, 49 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for mortality during the hospital stay (RR 1.02, 95% CI 0.07 to 15.87; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for mortality during the hospital stay (RR 1.09, 95% CI 0.80 to 1.48; 8 RCTs, 734 infants), or for deaths during the first 28 days of life (RR 1.17, 95% CI 0.43 to 3.20; 1 RCT, 90 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin for mortality during the hospital stay (RR 0.86, 95% CI 0.39 to 1.92; 2 RCTs, 114 infants).
Surgical ligation
Surgical PDA ligation versus medical treatment with indomethacin: the review by Malviya 2013 showed that there was no evidence of a difference between surgical PDA ligation and medical therapy for mortality during the hospital stay (RR 0.67, 95% CI 0.34 to 1.31; 1 RCT, 154 infants).
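The risk ratios (RR) quoted throughout these comparisons are the ratio of event rates in the two arms, with the 95% CI computed on the log scale. A minimal sketch (the counts below are hypothetical, not drawn from any trial above):

```python
import math

def risk_ratio(events_trt, n_trt, events_ctl, n_ctl):
    """Risk ratio with a 95% CI computed on the log scale (Katz method).

    All counts are hypothetical, for illustration only.
    """
    r_trt = events_trt / n_trt
    r_ctl = events_ctl / n_ctl
    rr = r_trt / r_ctl
    # Standard error of log(RR) from the 2x2 event counts.
    se_log = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical: 12/100 deaths with treatment versus 15/100 with control
rr, lo, hi = risk_ratio(12, 100, 15, 100)
print(f"RR {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Here the CI spans 1, which is the situation described throughout as "no evidence of a difference".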
Necrotising enterocolitis (NEC)
Seven Cochrane Reviews reported on the outcome of NEC. They included the following interventions ( Table 18 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for NEC (≥ Bell stage 2; RR 1.27, 95% CI 0.36 to 4.55; 2 RCTs, 147 infants; low‐certainty evidence).
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that compared to a short course of indomethacin, a prolonged course increased the risk of any stage of NEC (RR 1.87, 95% CI 1.07 to 3.27; 4 RCTs, 310 infants).
Continuous infusion versus intermittent bolus of indomethacin: the review by Görk 2008 showed that there was no evidence of a difference between continuous infusion and intermittent bolus of indomethacin for NEC (≥ Bell stage 2; RR 0.56, 95% CI 0.03 to 12.23; 1 RCT, 22 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with indomethacin and expectant management for NEC (≥ Bell stage 2; RR 1.56, 95% CI 0.28 to 8.80; 2 RCTs, 168 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for NEC (≥ Bell stage 2; RR 0.80, 95% CI 0.18 to 3.49; 2 RCTs, 188 infants).
Ibuprofen
IV ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that there was no evidence of a difference between IV ibuprofen and placebo or no treatment for any stage of NEC (RR 1.84, 95% CI 0.87 to 3.90; 2 RCTs, 264 infants; moderate‐certainty evidence).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that compared to indomethacin, ibuprofen reduced the risk of any stage of NEC (RR 0.68, 95% CI 0.49 to 0.94; 18 RCTs, 1292 infants; moderate‐certainty evidence).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that compared to indomethacin, oral ibuprofen reduced the risk of any stage of NEC (RR 0.41, 95% CI 0.23 to 0.73; 7 RCTs, 249 infants; low‐certainty evidence).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for any stage of NEC (RR 0.86, 95% CI 0.35 to 2.15; 3 RCTs, 236 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for any stage of NEC (RR 1.00, 95% CI 0.40 to 2.50; 2 RCTs, 130 infants; low‐certainty evidence).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management for NEC (≥ Bell stage 2; RR 2.89, 95% CI 0.84 to 9.95; 3 RCTs, 305 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management for NEC (≥ Bell stage 2; RR 1.01, 95% CI 0.42 to 2.44; 2 RCTs, 124 infants).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen for any stage of NEC (RR 0.38, 95% CI 0.08 to 1.86; 1 RCT, 49 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for any stage of NEC (RR 0.44, 95% CI 0.12 to 1.60; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for NEC (by radiological diagnosis; RR 1.30, 95% CI 0.87 to 1.94; 10 RCTs, 1015 infants; moderate‐certainty evidence).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that compared to indomethacin, acetaminophen reduced the risk of NEC (by radiological diagnosis; RR 0.42, 95% CI 0.19 to 0.96; 4 RCTs, 384 infants; low‐certainty evidence).
Late acetaminophen (initiated on or later than day 14) versus placebo: the review by Jasani 2022 showed that there was no evidence of a difference between late acetaminophen and placebo for NEC (by radiological diagnosis; RR 1.04, 95% CI 0.07 to 15.76; 1 RCT, 55 infants; low‐certainty evidence).
Acetaminophen and ibuprofen combination therapy versus ibuprofen alone: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen combination therapy and ibuprofen alone for NEC (by radiological diagnosis; RR 0.33, 95% CI 0.01 to 7.45; 1 RCT, 24 infants; low‐certainty evidence).
Surgical ligation
Surgical PDA ligation versus medical treatment with indomethacin: the review by Malviya 2013 showed that there was no evidence of a difference between surgical PDA ligation and medical therapy for NEC (by radiological diagnosis; RR 0.95, 95% CI 0.29 to 3.15; 1 RCT, 154 infants).
Gastrointestinal bleeding
Three Cochrane Reviews reported on gastrointestinal bleeding. They included the following interventions ( Table 19 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for gastrointestinal bleeding (RR 0.33, 95% CI 0.01 to 7.58; 2 RCTs, 119 infants; low‐certainty evidence).
Ibuprofen
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for gastrointestinal bleeding (RR 0.94, 95% CI 0.55 to 1.61; 7 RCTs, 514 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for gastrointestinal bleeding (RD 0.07, 95% CI ‐0.05 to 0.18; 3 RCTs, 85 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for gastrointestinal bleeding (RR 2.89, 95% CI 0.12 to 69.24; 2 RCTs, 172 infants).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for gastrointestinal bleeding (RR 1.50, 95% CI 0.58 to 3.86; 2 RCTs, 120 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for gastrointestinal bleeding (RR 0.51, 95% CI 0.16 to 1.59; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that compared to ibuprofen, acetaminophen reduced gastrointestinal bleeding (RD ‐0.05, 95% CI ‐0.09 to ‐0.02; 7 RCTs, 693 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and indomethacin for gastrointestinal bleeding (RR 0.63, 95% CI 0.32 to 1.25; 3 RCTs, 347 infants).
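Risk differences (RD) such as the ‐0.05 reported for acetaminophen versus ibuprofen translate directly into a number needed to treat (NNT = 1/|RD|, about 20 in that case). A sketch with hypothetical counts chosen to yield a similar estimate:

```python
import math

def risk_difference(events_trt, n_trt, events_ctl, n_ctl):
    """Risk difference with a Wald 95% CI; all counts hypothetical."""
    r_trt, r_ctl = events_trt / n_trt, events_ctl / n_ctl
    rd = r_trt - r_ctl
    se = math.sqrt(r_trt * (1 - r_trt) / n_trt + r_ctl * (1 - r_ctl) / n_ctl)
    return rd, rd - 1.96 * se, rd + 1.96 * se

# Hypothetical: 10/350 bleeds with treatment A versus 28/350 with treatment B
rd, lo, hi = risk_difference(10, 350, 28, 350)
nnt = 1 / abs(rd)  # number needed to treat to avoid one event
print(f"RD {rd:.2f}, 95% CI {lo:.2f} to {hi:.2f}; NNT ~ {nnt:.0f}")
```

NNT values derived this way apply only to populations with a baseline risk similar to the control arms of the pooled trials.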
Gastrointestinal perforation
Four Cochrane Reviews reported on gastrointestinal perforation. They included the following interventions ( Table 20 ).
Indomethacin
Indomethacin versus placebo or no treatment: the review by Evans 2021 showed that there was no evidence of a difference between indomethacin and placebo or no treatment for gastrointestinal perforation (RR 0.98, 95% CI 0.06 to 15.40; 1 RCT, 127 infants).
Ibuprofen
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between ibuprofen and indomethacin for gastrointestinal perforation (RR 0.48, 95% CI 0.20 to 1.14; 5 RCTs, 255 infants).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for gastrointestinal perforation (RD ‐0.01, 95% CI ‐0.25 to 0.04; 2 RCTs, 62 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for gastrointestinal perforation (RR 0.32, 95% CI 0.01 to 7.48; 2 RCTs, 134 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between early treatment with ibuprofen and expectant management for gastrointestinal perforation (RR 0.47, 95% CI 0.09 to 2.47; 2 RCTs, 171 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with ibuprofen and expectant management for gastrointestinal perforation (RR 0.50, 95% CI 0.05 to 5.24; 1 RCT, 64 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for gastrointestinal perforation (RR 2.04, 95% CI 0.19 to 21.82; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen for gastrointestinal perforation (RR 2.83, 95% CI 0.12 to 67.87; 2 RCTs, 191 infants).
Oliguria
Five Cochrane Reviews reported on the outcome of oliguria. They included the following interventions ( Table 21 ).
Indomethacin
Prolonged versus short course of indomethacin: the review by Herrera 2007 showed that compared to a short course of indomethacin, a prolonged course reduced oliguria (urine output < 1 mL/kg/hour; RR 0.27, 95% CI 0.13 to 0.60; 2 RCTs, 197 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that compared to expectant management, early treatment with indomethacin increased oliguria (urine output < 1 mL/kg/hour; RR 4.59, 95% CI 1.39 to 15.21; 1 RCT, 127 infants).
Very early treatment (initiated within three days) versus expectant management: the review by Mitra 2020a showed that there was no evidence of a difference between very early treatment with indomethacin and expectant management for oliguria (urine output < 1 mL/kg/hour; RR 5.00, 95% CI 0.63 to 39.39; 1 RCT, 44 infants).
Ibuprofen
IV ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that compared to placebo or no treatment, IV ibuprofen increased oliguria (urine output < 1 mL/kg/hour; RR 39.00, 95% CI 2.40 to 633.01; 1 RCT, 134 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that compared to indomethacin, ibuprofen reduced oliguria (urine output < 1 mL/kg/hour; RR 0.28, 95% CI 0.14 to 0.54; 6 RCTs, 576 infants; moderate‐certainty evidence).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for oliguria (RD 0.00, 95% CI ‐0.10 to 0.10; 1 RCT, 36 infants).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and IV ibuprofen for oliguria (urine output < 1 mL/kg/hour; RR 0.14, 95% CI 0.01 to 2.66; 4 RCTs, 304 infants; low‐certainty evidence).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for oliguria defined as urine output < 0.5 mL/kg/hour (RR 1.57, 95% CI 0.44 to 5.63; 2 RCTs, 120 infants; low‐certainty evidence); or oliguria defined as urine output < 1 mL/kg/hour (RR 1.50, 95% CI 0.27 to 8.43; 1 RCT, 70 infants).
Early treatment (initiated within seven days) versus expectant management: the review by Mitra 2020a showed that compared to expectant management, early treatment with ibuprofen increased oliguria (urine output < 1 mL/kg/hour; RR 39.00, 95% CI 2.40 to 633.01; 1 RCT, 134 infants).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen for oliguria (urine output < 1 mL/kg/hour; RR 5.31, 95% CI 0.29 to 97.57; 1 RCT, 49 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for oliguria (urine output < 1 mL/kg/hour; RR 0.51, 95% CI 0.05 to 5.45; 1 RCT, 111 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that compared to ibuprofen, acetaminophen reduced oliguria (urine output < 1 mL/kg/hour; RR 0.47, 95% CI 0.30 to 0.76; 5 RCTs, 608 infants).
Acetaminophen and ibuprofen combination therapy versus ibuprofen alone: the review by Jasani 2022 showed that there was no evidence of a difference between acetaminophen and ibuprofen combination therapy and ibuprofen alone for oliguria (RR 0.50, 95% CI 0.05 to 4.81; 1 RCT, 24 infants).
Adjunct therapies
Dopamine versus control: the review by Barrington 2002 showed that there was no evidence of a difference between the combination of dopamine and indomethacin versus indomethacin alone for oliguria (RR 0.73, 95% CI 0.35 to 1.54; 1 RCT, 33 infants).
Serum/plasma levels of creatinine after treatment
Two Cochrane Reviews reported on this outcome. They included the following interventions ( Table 22 ).
Ibuprofen
IV ibuprofen versus placebo or no treatment: the review by Ohlsson 2020b showed that compared to placebo or no treatment, IV ibuprofen increased serum creatinine post‐treatment (MD 29.17 μmol/L, 95% CI 12.60 to 45.74; 1 RCT, 134 infants).
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that compared to indomethacin, ibuprofen reduced serum creatinine post‐treatment (MD ‐8.12 μmol/L, 95% CI ‐10.81 to ‐5.43; 11 RCTs, 918 infants; low‐certainty evidence).
Oral ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that there was no evidence of a difference between oral ibuprofen and indomethacin for serum creatinine post‐treatment (MD ‐0.51 μmol/L, 95% CI ‐6.04 to 5.01; 5 RCTs, 190 infants; very low‐certainty evidence).
Oral ibuprofen versus IV ibuprofen: the review by Ohlsson 2020b showed that compared to IV ibuprofen, oral ibuprofen reduced serum creatinine post‐treatment (MD ‐22.47 μmol/L, 95% CI ‐32.40 to ‐12.53; 2 RCTs, 170 infants; low‐certainty evidence).
High‐dose ibuprofen versus standard‐dose ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between high‐dose ibuprofen and standard‐dose ibuprofen for serum creatinine post‐treatment (MD 8.84 μmol/L, 95% CI ‐4.41 to 22.09; 1 RCT, 60 infants).
Echocardiogram‐guided versus standard IV ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between echocardiogram‐guided and standard IV ibuprofen for serum creatinine post‐treatment (MD ‐11.49 μmol/L, 95% CI ‐29.88 to 6.90; 1 RCT, 49 infants).
Continuous infusion versus intermittent bolus of ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between continuous infusion and intermittent bolus of ibuprofen for serum creatinine post‐treatment (MD 2.10 μmol/L, 95% CI ‐4.92 to 9.12; 1 RCT, 111 infants).
Rectal ibuprofen versus oral ibuprofen: the review by Ohlsson 2020b showed that compared to oral ibuprofen, rectal ibuprofen reduced serum creatinine post‐treatment (MD ‐6.18 μmol/L, 95% CI ‐7.22 to ‐5.14; 1 RCT, 72 infants).
Adjunct therapies
Dopamine versus control: the review by Barrington 2002 showed that there was no evidence of a difference between the combination of dopamine and indomethacin versus indomethacin alone for serum creatinine post‐treatment (MD 2.04 μmol/L, 95% CI ‐17.90 to 21.97; 2 RCTs, 59 infants).
Increase in serum/plasma levels of creatinine after treatment
Three Cochrane Reviews reported on this outcome. They included the following interventions ( Table 23 ).
Ibuprofen
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that compared to indomethacin, ibuprofen led to a lower increase in serum creatinine post‐treatment (MD ‐15.91 μmol/L, 95% CI ‐31.78 to ‐0.04; 1 RCT, 21 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that compared to ibuprofen, acetaminophen led to a lower increase in serum creatinine post‐treatment (MD ‐10.61 μmol/L, 95% CI ‐11.49 to ‐8.84; 6 RCTs, 557 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that compared to indomethacin, acetaminophen led to a lower increase in serum creatinine post‐treatment (MD ‐32.71 μmol/L, 95% CI ‐35.36 to ‐30.06; 2 RCTs, 270 infants).
Adjunct therapies
Furosemide versus control: the review by Brion 2001 showed that there was no evidence of a difference between the combination of furosemide and indomethacin versus indomethacin alone for increase in serum creatinine post‐treatment (MD ‐0.88 μmol/L, 95% CI ‐12.38 to 10.61; 3 RCTs, 70 infants).
Serum/plasma levels of bilirubin after treatment
Two Cochrane Reviews reported on this outcome. They included the following interventions ( Table 24 ).
Ibuprofen
Ibuprofen versus indomethacin: the review by Ohlsson 2020b showed that compared to indomethacin, ibuprofen increased serum bilirubin levels post‐treatment (MD 12.65 μmol/L, 95% CI 9.96 to 15.34; 1 RCT, 200 infants).
Rectal ibuprofen versus oral ibuprofen: the review by Ohlsson 2020b showed that there was no evidence of a difference between rectal ibuprofen and oral ibuprofen for serum bilirubin levels post‐treatment (MD 7.01 μmol/L, 95% CI ‐11.23 to 25.25; 1 RCT, 72 infants).
Acetaminophen
Acetaminophen versus ibuprofen: the review by Jasani 2022 showed that compared to ibuprofen, acetaminophen reduced serum bilirubin levels post‐treatment (MD ‐10.56 μmol/L, 95% CI ‐13.16 to ‐7.96; 4 RCTs, 400 infants).
Acetaminophen versus indomethacin: the review by Jasani 2022 showed that compared to indomethacin, acetaminophen increased serum bilirubin levels post‐treatment (MD 1.03 μmol/L, 95% CI 0.13 to 1.93; 1 RCT, 200 infants).
Increase in serum/plasma levels of bilirubin after treatment
No review reported on this outcome.
Subgroup analyses
None of the reviews provided data on any of our pre‐specified subgroups.
Discussion
Summary of main results
We included 16 Cochrane Reviews (138 randomised controlled trials (RCTs), 11,856 preterm infants) on the management of patent ductus arteriosus (PDA) in preterm infants. The number of trials included in each review ranged from none to 39. Six reviews (N = 4976) reported on prophylactic interventions for the prevention of PDA, including pharmacological prophylaxis with prostaglandin inhibitor drugs (indomethacin, ibuprofen, and acetaminophen), prophylactic surgical PDA ligation, and non‐pharmacological interventions (chest shielding during phototherapy and restriction of fluid intake). One review (N = 97) reported on the use of indomethacin for the management of asymptomatic PDA. Nine reviews (N = 6783) reported on interventions for the management of symptomatic PDA, including pharmacotherapy with prostaglandin inhibitor drugs (indomethacin, ibuprofen, and acetaminophen) in various routes and dosages; surgical PDA ligation; and adjunct therapies (furosemide and dopamine used in conjunction with indomethacin). The certainty of the evidence, where reported by the respective reviews, ranged from moderate to low for the primary outcomes for prevention of PDA, and from high to low for the primary outcomes for treatment of PDA.
Interventions for prevention of PDA and related complications in preterm infants
Prophylactic indomethacin probably reduces severe intraventricular haemorrhage (IVH), while it does not appear to affect the composite outcome of death or moderate/severe neurodevelopmental disability. Prophylactic ibuprofen probably marginally reduces severe IVH (moderate‐certainty evidence), while the evidence is very uncertain on the effect of prophylactic acetaminophen on severe IVH. There is no evidence on the effect of either prophylactic ibuprofen or acetaminophen on the composite outcome of death or moderate/severe neurodevelopmental disability. There is a paucity of evidence for any other prophylactic intervention on the primary outcomes of severe IVH and the composite of death or moderate/severe neurodevelopmental disability.
With respect to other patient‐important outcomes, both prophylactic indomethacin and ibuprofen (moderate‐certainty evidence) reduced the need for invasive PDA closure. Necrotising enterocolitis (NEC) appeared to be lower with both prophylactic surgical ligation and fluid restriction. There was no evidence of an effect of the other prophylactic interventions on other clinically relevant outcomes, such as mortality or chronic lung disease (CLD).
Interventions for management of asymptomatic PDA in preterm infants
Overall evidence is limited (3 RCTs, 97 infants) for the management of asymptomatic PDA. Treatment of asymptomatic PDA with indomethacin appears to reduce the development of symptomatic PDA post‐treatment. There is no evidence on the effect of asymptomatic PDA treatment on the composite outcome of death or moderate/severe neurodevelopmental disability.
Interventions for management of symptomatic PDA in preterm infants
All available prostaglandin inhibitor drugs appear to be more effective in PDA closure than placebo or no treatment (high‐certainty evidence for indomethacin; moderate‐certainty evidence for ibuprofen; low‐certainty evidence for early administration of acetaminophen). Oral ibuprofen appears to be more effective in PDA closure than IV ibuprofen (moderate‐certainty evidence), and high‐dose ibuprofen appears to be more effective than standard‐dose ibuprofen (moderate‐certainty evidence). There was no evidence of a difference in PDA closure effectiveness between the three available prostaglandin inhibitor drugs (low‐ to moderate‐certainty evidence). There is no evidence on the effect of treatment of symptomatic PDA on the composite outcome of death or moderate/severe neurodevelopmental disability.
From a safety perspective, compared to indomethacin, NEC appears to be lower with ibuprofen (any route; moderate‐certainty evidence), with exclusively oral ibuprofen (low‐certainty evidence), and with acetaminophen (low‐certainty evidence). Conversely, NEC appears to be more common with a prolonged course of indomethacin than with a shorter course. Oliguria is also more frequent with indomethacin than with ibuprofen (moderate‐certainty evidence), with ibuprofen than with either placebo or acetaminophen, and with early pharmacological treatment of PDA (initiated within the first seven days of life) than with expectant management.
Overall completeness and applicability of evidence
We found reviews for all our prespecified objectives. However, there was substantial variation in the certainty of available evidence for the different interventions for patient‐important outcomes. For prophylactic interventions, the precision of the estimate of effects is best with indomethacin, while the evidence is limited for ibuprofen and sparse for acetaminophen. Evidence from RCTs does suggest a definite benefit with prophylactic indomethacin, and a probable benefit with prophylactic ibuprofen, in reducing severe IVH. However, the results should be interpreted with caution, as several of the RCTs contributing to the reviews on prophylactic indomethacin and ibuprofen were conducted more than 20 years ago, when NICU practices were vastly different, including the use of antenatal corticosteroids, approaches to mechanical ventilation, and the use of surfactant. It is unclear whether the treatment effects shown in these trials still apply today to extremely preterm infants at higher risk of severe IVH. From a safety perspective, neither indomethacin nor ibuprofen prophylaxis was shown to increase patient‐important adverse outcomes, such as NEC or gastrointestinal perforation. However, the trials included in the respective reviews did not consider the effect of co‐administration of other drugs that might cause harm. This might be an important consideration for clinicians, especially with the emergence of newer prophylactic therapies, such as prophylactic hydrocortisone. A recent individual patient data (IPD) meta‐analysis of RCTs showed that prophylactic low‐dose hydrocortisone can improve survival without CLD (adjusted odds ratio (OR) 1.48, 95% CI 1.02 to 2.16); however, the concomitant use of prophylactic indomethacin and hydrocortisone increases the risk of gastrointestinal perforation (OR 2.50, 95% CI 1.33 to 4.69; Shaffer 2019 ).
The largest trial contributing to this IPD meta‐analysis, the PREMILOC trial (N = 1072), failed to demonstrate similar harm in the subgroup of infants who were co‐administered hydrocortisone and ibuprofen (47% of infants enrolled in the intervention arm of the trial received ibuprofen ( Baud 2016 )). Therefore, clinicians should weigh the current applicability of existing evidence for benefit against the potential for harm with concomitant use of other medications, while considering the use of prophylactic non‐steroidal anti‐inflammatory drugs (NSAIDs) in preterm infants. Similarly, clinicians should exercise caution while considering non‐pharmacologic interventions, such as prophylactic fluid restriction to prevent a symptomatic PDA, given the trials were conducted between 1980 and 2000 in moderately preterm infants, and therefore, may not be applicable to extremely preterm infants in the current context. Further, clinicians should refrain from extrapolating this evidence to using fluid restriction as a therapeutic option for treatment of symptomatic PDA, given there is no evidence to support the latter.
With respect to treatment of asymptomatic or symptomatic PDA, the availability of RCT evidence is substantially variable, depending on the intervention used. Overall, RCT evidence consistently demonstrates that the use of prostaglandin inhibitor drugs is effective in closing a PDA. Despite effective PDA closure, current evidence fails to demonstrate a benefit of prostaglandin inhibitor drugs for patient‐important clinical outcomes, such as need for invasive PDA closure, CLD, or mortality. However, several study limitations prevent us from drawing firm conclusions on the lack of efficacy of the prostaglandin inhibitor drugs for clinical outcomes. First, there was wide variation in PDA definitions in the included trials, especially the trials for treatment of symptomatic PDA. Symptomatic PDA was defined in most trials based on characteristic clinical signs, along with echocardiographic evidence of an increased PDA shunt volume. However, the trials did not have consistent eligibility criteria, from either a clinical or an echocardiographic standpoint. Further, the most used echocardiographic criteria, the PDA size, and the left atrium to aortic root ratio, have been shown to have poor inter‐rater reliability, and therefore, may represent suboptimal inclusion criteria ( de Freitas Martins 2018 ; Zonnenberg 2012 ). In addition, the trials did not attempt to differentiate between PDAs with moderate versus high shunt volume, based on any clinical or echocardiographic criteria. These drawbacks of existing RCTs may have led us to include a highly heterogeneous population in the meta‐analyses, especially, more mature infants with smaller PDA shunt volumes, in whom spontaneous PDA closure was highly likely to occur. Second, as evident from their wide confidence intervals, the effect estimates for the most important clinical outcomes were imprecise, which failed to provide convincing evidence for an absence of effect on such outcomes. 
Third, a substantial proportion of infants in the placebo or no treatment group ended up receiving open‐label medical therapy, thereby, likely pulling the effect estimate towards the null. The latter, especially, might be an important reason why effective PDA closure did not necessarily translate into improved longer‐term clinical benefit.
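The imprecision point is mechanical: with few events, the 95% confidence interval around a relative risk is wide enough to be compatible with both benefit and harm. A minimal sketch of the standard log-RR interval, using invented counts rather than data from any included trial:

```python
import math

def rr_with_ci(events_trt, n_trt, events_ctl, n_ctl):
    """Relative risk with a 95% CI from the usual log-RR standard error."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Same underlying risks (10% versus ~15%), but ten-fold different sample sizes.
small = rr_with_ci(5, 50, 8, 53)       # RR 0.66 (95% CI 0.23 to 1.89)
large = rr_with_ci(50, 500, 80, 530)   # RR 0.66 (95% CI 0.48 to 0.92)
print("small trial: RR %.2f (95%% CI %.2f to %.2f)" % small)
print("large trial: RR %.2f (95%% CI %.2f to %.2f)" % large)
```

The small trial's interval spans 1.0 comfortably, so even a genuinely protective effect cannot be distinguished from harm; only the larger trial excludes the null.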
The need for subsequent open‐label therapy, including definitive surgical PDA closure, also highlights the fact that medical therapy, though better than placebo, is by no means a highly effective option for PDA closure. Most RCTs of medical treatment were thus essentially trials of drug therapy, rather than of elimination of the PDA shunt. Therefore, despite growing calls for accepting the null hypothesis and abandoning further clinical trials on PDA management, the current evidence underscores the need to clearly establish which PDA shunts, if any, are associated with worse clinical outcomes, to pursue further clinical trials that include only those infants at the highest risk of PDA‐attributable morbidities, and to explore highly effective and safe shunt elimination strategies for such clinically important PDA shunts.
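The poor inter-rater reliability noted earlier for echocardiographic criteria is usually quantified with Cohen's kappa, which discounts the agreement two raters would reach by chance. A toy example with invented ratings (not data from de Freitas Martins 2018 or Zonnenberg 2012):

```python
# Two raters classify the same ten echocardiograms as "large" or "small" PDA.
rater_a = ["large", "large", "small", "small", "large",
           "small", "large", "small", "small", "large"]
rater_b = ["large", "small", "small", "large", "large",
           "small", "small", "small", "large", "large"]

n = len(rater_a)
categories = sorted(set(rater_a) | set(rater_b))

# Observed agreement: the fraction of cases where the raters match.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's marginal rate, summed over categories.
expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
               for c in categories)

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement {observed:.2f}, kappa {kappa:.2f}")
# → observed agreement 0.60, kappa 0.20
```

Raw agreement of 60% here collapses to a kappa of only 0.20, conventionally read as slight agreement; this is the sense in which PDA size and the left atrium to aortic root ratio make suboptimal inclusion criteria.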
Quality of the evidence
The quality of reviews as assessed by the AMSTAR 2 criteria was variable. We only judged two reviews to be of high quality, while five were of low quality, and two of critically low quality ( Table 2 ). Reviews that we judged as critically low quality failed to use a satisfactory technique for assessing the risk of bias in individual studies, in addition to omissions in other critical domains of the AMSTAR 2 criteria. Of note, none of the included reviews provided a rationale for including only randomised controlled trials in their review. This may be associated with Cochrane Neonatal's approach of traditionally including only RCTs in reviews of interventions to obtain the most unbiased estimates of treatment effects. However, in the absence of well‐done RCTs, other study designs, such as large observational studies, may be an important source of evidence, especially related to the safety of the interventions. Further, the majority of the reviews did not explicitly include information on funding sources for the trials. This did have an impact on the quality of the reviews as per the AMSTAR 2 criteria, as full disclosure of any funding is important to ensure that no financial incentive introduced bias ( Lundh 2017 ).
Only five of the Cochrane Reviews assessed the overall certainty of the evidence using GRADE methodology ( Evans 2021 ; Jasani 2022 ; Mitra 2020a ; Ohlsson 2020a ; Ohlsson 2020b ). We did not reassess the certainty of evidence, but summarised the certainty assessed by the respective review authors. With regard to the primary outcomes defined in this overview, the certainty of the evidence was not reported for all available interventions. For PDA prevention, the certainty of the evidence, which was available only for prophylactic ibuprofen for the primary outcome of severe IVH, was deemed to be moderate. The certainty of the evidence was not assessed for interventions for the management of asymptomatic PDA. For symptomatic PDA treatment, the certainty of the evidence for the primary outcome of failure of PDA closure was available for all prostaglandin inhibitor drugs: overall certainty was high for indomethacin, moderate for ibuprofen, and moderate to low for acetaminophen. The most common reason for downgrading the certainty of the evidence was serious risk of bias, followed by imprecision in effect estimates.
Potential biases in the overview process
We are confident that this overview is a comprehensive summary of all currently available Cochrane Reviews on the management of the PDA in preterm infants. We did not apply any date restrictions to the search. Five of the 16 reviews were either first published or updated in the past two years, making this an up‐to‐date summary of the best available evidence. One potential source of bias is that two of the overview authors are first authors or co‐authors on three of the included reviews. However, quality assessment of the reviews, using the AMSTAR 2 criteria, was carried out in duplicate to minimise any intellectual bias ( Table 2 ).
Agreements and disagreements with other studies or reviews
With respect to prophylactic therapies, the results of this overview largely align with the recently published Cochrane network meta‐analysis by Mitra 2022 . Both the overview and the network meta‐analysis showed that prophylactic indomethacin reduces the risk of severe IVH and the need for surgical PDA closure, increases the risk of oliguria, and likely does not increase the risk of NEC or gastrointestinal perforation. In addition, both the overview and the network meta‐analysis demonstrated that prophylactic ibuprofen also reduces the need for surgical PDA closure, and likely does not increase the risk of NEC or gastrointestinal perforation. However, the network meta‐analysis by Mitra 2022 failed to demonstrate a difference for severe IVH and oliguria with prophylactic ibuprofen, unlike the Ohlsson 2020a review, which showed a marginal reduction in severe IVH, in addition to a definite increase in oliguria. These observed differences in results may be related to corresponding differences in the datasets analysed in the two reviews, as the search for the Ohlsson 2020a review was updated in October 2018, while the search for the Mitra 2022 review was updated in December 2021. However, given the considerable overlap of studies included in the network meta‐analysis by Mitra 2022 and the Ohlsson 2020a review, the more likely explanation for the observed differences lies in the analytical methods. While the Ohlsson 2020a review used the traditional Cochrane Neonatal approach of fixed‐effect meta‐analysis, thereby generally obtaining more precise estimates of effects, the network meta‐analysis used a Bayesian random‐effects model, which was likely to produce more conservative estimates, especially in the absence of a substantial contribution from the indirect comparisons, and therefore failed to establish differences for these outcomes.
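The analytical contrast matters because a random-effects model adds the estimated between-trial variance to every trial's weight, which can only widen the pooled interval. A rough sketch of both poolings on the log-RR scale, with the frequentist DerSimonian-Laird moment estimator standing in for the Bayesian model and all numbers invented:

```python
import math

# Invented per-trial log relative risks and standard errors -- not the data
# from Ohlsson 2020a or Mitra 2022.
log_rr = [-0.40, -0.10, -0.55, 0.05]
se     = [ 0.20,  0.25,  0.30, 0.22]

# Fixed-effect pooling: inverse-variance weights.
w_fe = [1 / s ** 2 for s in se]
pooled_fe = sum(w * y for w, y in zip(w_fe, log_rr)) / sum(w_fe)
se_fe = math.sqrt(1 / sum(w_fe))

# DerSimonian-Laird estimate of the between-trial variance tau^2.
q = sum(w * (y - pooled_fe) ** 2 for w, y in zip(w_fe, log_rr))
c = sum(w_fe) - sum(w ** 2 for w in w_fe) / sum(w_fe)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)

# Random-effects pooling: tau^2 is added to every trial's variance.
w_re = [1 / (s ** 2 + tau2) for s in se]
pooled_re = sum(w * y for w, y in zip(w_re, log_rr)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

for label, est, s in [("fixed-effect  ", pooled_fe, se_fe),
                      ("random-effects", pooled_re, se_re)]:
    lo, hi = est - 1.96 * s, est + 1.96 * s
    print(f"{label}: RR {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

Because tau^2 is non-negative, the random-effects interval is never narrower than the fixed-effect one, so a borderline fixed-effect result (such as a marginal severe-IVH reduction) can lose significance under the random-effects model.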
With respect to prophylactic acetaminophen, both reviews failed to draw meaningful conclusions due to the overall paucity of evidence. With regard to treatment of symptomatic PDA, the results of this overview align with two previous network meta‐analyses, both of which demonstrated that prostaglandin inhibitor drugs were effective in closing a PDA, although this did not translate into a clinically meaningful benefit ( Jones 2011 ; Mitra 2018 ).
Overall, our findings generally support the current recommendations from the Canadian Pediatric Society (CPS ( Mitra 2022a )), and the American Academy of Pediatrics (AAP ( Hamrick 2020 )), that include: considering prophylactic indomethacin to prevent severe IVH in high risk extremely preterm infants, and refraining from pharmacotherapy for PDA closure in clinically stable preterm infants, given the lack of clear evidence for benefit, while judiciously weighing the benefits and harms of PDA treatment in clinically unstable, extremely preterm infants, given the overall lack of RCT evidence in this vulnerable population. However, it is important to note that both the CPS and AAP statements suggest considering invasive PDA closure (surgical ligation or percutaneous transcatheter closure) if the PDA remains persistently symptomatic, despite limited RCT evidence on the benefit of invasive PDA closure on clinically relevant outcomes.
Abstract
Background
Patent ductus arteriosus (PDA) is associated with significant morbidity and mortality in preterm infants. Several non‐pharmacological, pharmacological, and surgical approaches have been explored to prevent or treat a PDA.
Objectives
To summarise Cochrane Neonatal evidence on interventions (pharmacological or surgical) for the prevention of PDA and related complications, and interventions for the management of asymptomatic and symptomatic PDA in preterm infants.
Methods
We searched the Cochrane Database of Systematic Reviews on 20 October 2022 for ongoing and published Cochrane Reviews on the prevention and treatment of PDA in preterm (< 37 weeks' gestation) or low birthweight (< 2500 g) infants. We included all published Cochrane Reviews assessing the following categories of interventions: pharmacological therapy using prostaglandin inhibitor drugs (indomethacin, ibuprofen, and acetaminophen), adjunctive pharmacological interventions, invasive PDA closure procedures, and non‐pharmacological interventions. Two overview authors independently checked the eligibility of the reviews retrieved by the search, and extracted data from the included reviews using a predefined data extraction form. Any disagreements were resolved by discussion with a third overview author. Two overview authors independently assessed the methodological quality of the included reviews using the AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews) tool. We reported the GRADE certainty of evidence as assessed by the respective review authors using summary of findings tables.
Main results
We included 16 Cochrane Reviews, corresponding to 138 randomised clinical trials (RCTs) and 11,856 preterm infants, on the prevention and treatment of PDA in preterm infants. One of the 16 reviews had no included studies, and therefore, did not contribute to the results. Six reviews reported on prophylactic interventions for the prevention of PDA and included pharmacological prophylaxis with prostaglandin inhibitor drugs, prophylactic surgical PDA ligation, and non‐pharmacologic interventions (chest shielding during phototherapy and restriction of fluid intake); one review reported on the use of indomethacin for the management of asymptomatic PDA; nine reviews reported on interventions for the management of symptomatic PDA, and included pharmacotherapy with prostaglandin inhibitor drugs in various routes and dosages, surgical PDA ligation, and adjunct therapies (use of furosemide and dopamine in conjunction with indomethacin). The quality of reviews varied. Two reviews were assessed to be high quality, seven reviews were of moderate quality, five of low quality, while two reviews were deemed to be of critically low quality.
For prevention of PDA, prophylactic indomethacin reduces severe intraventricular haemorrhage (IVH; relative risk (RR) 0.66, 95% confidence interval (CI) 0.53 to 0.82; 14 RCTs, 2588 infants), and the need for invasive PDA closure (RR 0.51, 95% CI 0.37 to 0.71; 8 RCTs, 1791 infants), but it does not appear to affect the composite outcome of death or moderate/severe neurodevelopmental disability (RR 1.02, 95% CI 0.90 to 1.15; 3 RCTs, 1491 infants). Prophylactic ibuprofen probably marginally reduces severe IVH (RR 0.67, 95% CI 0.45 to 1.00; 7 RCTs, 925 infants; moderate‐certainty evidence), and the need for invasive PDA closure (RR 0.46, 95% CI 0.22 to 0.96; 7 RCTs, 925 infants; moderate‐certainty evidence). The evidence is very uncertain on the effect of prophylactic acetaminophen on severe IVH (RR 1.09, 95% CI 0.07 to 16.39; 1 RCT, 48 infants). Necrotising enterocolitis (NEC) was lower with both prophylactic surgical ligation (RR 0.25, 95% CI 0.08 to 0.83; 1 RCT, 84 infants), and fluid restriction (RR 0.43, 95% CI 0.21 to 0.87; 4 RCTs, 526 infants).
For treatment of asymptomatic PDA, indomethacin appears to reduce the development of symptomatic PDA post‐treatment (RR 0.36, 95% CI 0.19 to 0.68; 3 RCTs, 97 infants; quality of source review: critically low).
For treatment of symptomatic PDA, all available prostaglandin inhibitor drugs appear to be more effective in closing a PDA than placebo or no treatment (indomethacin: RR 0.30, 95% CI 0.23 to 0.38; 10 RCTs, 654 infants; high‐certainty evidence; ibuprofen: RR 0.62, 95% CI 0.44 to 0.86; 2 RCTs, 206 infants; moderate‐certainty evidence; early administration of acetaminophen: RR 0.35, 95% CI 0.23 to 0.53; 2 RCTs, 127 infants; low‐certainty evidence). Oral ibuprofen appears to be more effective in PDA closure than intravenous (IV) ibuprofen (RR 0.38, 95% CI 0.26 to 0.56; 5 RCTs, 406 infants; moderate‐certainty evidence). High‐dose ibuprofen appears to be more effective in PDA closure than standard‐dose ibuprofen (RR 0.37, 95% CI 0.22 to 0.61; 3 RCTs, 190 infants; moderate‐certainty evidence). With respect to adverse outcomes, compared to indomethacin administration, NEC appears to be lower with ibuprofen (any route; RR 0.68, 95% CI 0.49 to 0.94; 18 RCTs, 1292 infants; moderate‐certainty evidence), oral ibuprofen (RR 0.41, 95% CI 0.23 to 0.73; 7 RCTs, 249 infants; low‐certainty evidence), and with acetaminophen (RR 0.42, 95% CI 0.19 to 0.96; 4 RCTs, 384 infants; low‐certainty evidence). However, NEC appears to be increased with a prolonged course of indomethacin versus a shorter course (RR 1.87, 95% CI 1.07 to 3.27; 4 RCTs, 310 infants).
Authors' conclusions
This overview summarised the evidence from 16 Cochrane Reviews of RCTs regarding the effects of interventions for the prevention and treatment of PDA in preterm infants.
Prophylactic indomethacin reduces severe IVH, but does not appear to affect the composite outcome of death or moderate/severe neurodevelopmental disability. Prophylactic ibuprofen probably marginally reduces severe IVH (moderate‐certainty evidence), while the evidence is very uncertain on the effect of prophylactic acetaminophen on severe IVH. All available prostaglandin inhibitor drugs appear to be effective in symptomatic PDA closure compared to no treatment (high‐certainty evidence for indomethacin; moderate‐certainty evidence for ibuprofen; low‐certainty evidence for early administration of acetaminophen). Oral ibuprofen appears to be more effective in PDA closure than IV ibuprofen (moderate‐certainty evidence). High‐dose ibuprofen appears to be more effective in PDA closure than standard‐dose ibuprofen (moderate‐certainty evidence).
There are currently two ongoing reviews, one on fluid restriction for symptomatic PDA, and the other on invasive management of PDA in preterm infants.
Plain language summary
Treatments to manage patent ductus arteriosus in premature babies
Review question
What treatments are effective and safe in preventing or treating a common heart condition, called patent ductus arteriosus (PDA) in premature babies?
Background
PDA is a common complication among premature and low birthweight babies. PDA is an open blood vessel channel between the lungs and the heart, which usually closes shortly after birth. In premature and low birthweight babies, the PDA may remain open, and may contribute to life‐threatening complications. We wanted to see what treatments can safely and effectively prevent or treat a PDA and its related problems.
Study characteristics
We included 16 Cochrane Reviews. Of these, six reviews provided evidence on preventing a PDA with drugs, surgery, or other means that do not involve drugs or surgery. One review provided evidence on treating a PDA before the babies experience symptoms, while the rest of the reviews provided evidence on treating babies who are experiencing symptoms from their PDA, with either drugs or surgery.
Key results
This overview found that both indomethacin and ibuprofen may reduce severe brain bleeding and the need for PDA surgery, when given to premature babies before they experience symptoms from a PDA. When babies are experiencing symptoms from a PDA, all available drug therapies, that is, indomethacin, ibuprofen, and acetaminophen (specifically when given early), are effective in closing a PDA. If using ibuprofen therapy, giving the medication by mouth appears to be better than giving it by the intravenous route; and higher doses of ibuprofen appear to be more effective in closing a PDA than standard doses.
Certainty of the evidence
According to GRADE (a method to score the certainty of the trials supporting each outcome), the certainty of the evidence varied from very low to high. According to the AMSTAR 2 criteria (a method to rate the quality of reviews), the quality of the included Cochrane Reviews also varied from high to critically low, but was mostly between moderate and low.
How up to date is the search evidence
The search is up to date as of 20 October 2022.
Objectives
To summarise Cochrane Neonatal evidence on:
interventions (pharmacological or surgical) for prevention of patent ductus arteriosus and related complications in preterm infants; and

interventions (pharmacological or surgical) for management of patent ductus arteriosus in preterm infants, including:

interventions for management of asymptomatic patent ductus arteriosus in preterm infants; and

interventions for management of symptomatic (haemodynamically significant) patent ductus arteriosus in preterm infants.
History
Protocol first published: Issue 4, 2020

Acknowledgements
We would like to thank Ms Abbey MacLellan, MD candidate, Dalhousie University, Halifax, Canada, and Mr Austin Cameron, Neonatal Research Coordinator, IWK Health, Halifax, Canada, for their help with data extraction.
The methods section of this review is based on a standard template used by Cochrane Neonatal.
We would like to thank Cochrane Neonatal: Jane Cracknell and Michelle Fiander, Managing Editors; and Roger Soll and Bill McGuire, Co‐coordinating Editors, who provided editorial and administrative support.
We thank Michelle Fiander (Information Specialist) for running the searches.
Menelaos Konstantinidis, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health Toronto provided statistical peer review.
Differences between protocol and review
We made the following changes to the protocol ( Mitra 2020b ).
In this overview, we only included reviews that specifically reported on interventions primarily intended to prevent or treat a PDA, and not all interventions that reported PDA as an outcome. We clarified this under the Types of Interventions section, by adding the following sentence: “In this overview, we specifically included reviews of therapies primarily intended to prevent or treat a PDA”.
Contributions of authors
SM conceived the project.
SM, WdB, DW, and PSS drafted the overview, reviewed all drafts, and approved the final version of the overview.
Sources of support
Internal sources
No sources of support provided
External sources
Vermont Oxford Network, USA

Cochrane Neonatal Reviews are produced with support from Vermont Oxford Network, a worldwide collaboration of health professionals dedicated to providing evidence‐based care of the highest quality for newborn infants and their families.
Declarations of interest
SM is an Associate Editor, Cochrane Neonatal Group. However, he had no involvement in the editorial processing of this overview. He has also published medical articles related to the management of PDA in preterm infants.
WdB has published medical articles related to PDA in preterm infants. He was the project leader of the BeNeDuctus trial ( Hundscheid 2023 ), an international, multicentre, randomised non‐inferiority trial of early treatment versus expectant management of patent ductus arteriosus in preterm infants (study protocol: Hundscheid 2018 ; statistical analysis plan: Hundscheid 2021 ).
DW has no conflict of interest to declare.
PSS is an Associate Editor, Cochrane Neonatal Group. However, he had no involvement in the editorial processing of this overview.

Citation: Cochrane Database Syst Rev. 2023 Apr 11; 2023(4):CD013588 (licence: CC BY).
PMC10185832 (PMID: 37205182)

Introduction
Lipids, one of four broad classes of macromolecules in living organisms, are hydrophobic organic molecules with a diverse variety of important functions in many cellular processes. Some notable functions related to cancer progression include the involvement of lipids in cellular signaling, energy storage, and inflammatory and immune responses ( 1 ) . Accumulation of lipids in the tumor microenvironment (TME) has been shown to promote immune evasion and inflammation ( 2 , 3 ), and lipids are an important source of energy for rapidly proliferating cells ( 3 ).
Lipids also affect the immune system and its components in a variety of ways ( Figure 1 ). Abnormal lipid accumulation in tumors correlates with T-cell dysfunction, T-cell exhaustion, increased proportions of regulatory T cells (Tregs) ( 4 , 5 ) and memory T cells, and increased T-cell recall responses ( 6 ). Lipids also influence macrophage functions, in some cases causing increased plasticity ( 7 , 8 ) and in others leading to decreases in macrophage differentiation that can subsequently enhance tumor growth ( 8 , 9 ). The presence of lipid droplets in tumors has also been linked with the presence of natural killer (NK) cells and increased metastasis ( 10 ). Similarly, triglyceride accumulation in dendritic cells (DCs) has been shown to cause downregulation of antigen presentation and increased immune evasion.
This review focuses on the functions of lipids in the immune system and their effects on cancer progression and metastasis. Specifically, this review covers how lipids affect T cells, macrophages, NK cells, and DCs, as well as their roles in immunotherapy, cell cycling, and cancer metastasis, all as a means of prompting interest in novel treatment strategies based on this class of macromolecules. Also reviewed are the controversial evidence supporting the potential of lipids to serve as biomarkers in cancer treatment and early evidence of their activity in cancer progression.
Reviewed by: Avigdor Leftin, GE Healthcare, United States
Ana Patricia Cardoso, University of Porto, Portugal
Alyssa Joy Cozzo, University of North Carolina at Chapel Hill, United States
Lipids are a diverse class of biomolecules that have been implicated in cancer pathophysiology and in an array of immune responses, making them potential targets for improving immune responsiveness. Lipids and lipid oxidation can also affect tumor progression and response to treatment. Although their importance in cellular functions and their potential as cancer biomarkers have been explored, lipids have yet to be extensively investigated as a possible form of cancer therapy. This review explores the role of lipids in cancer pathophysiology and describes how further understanding of these macromolecules could prompt novel treatments for cancer.
T cells
T lymphocytes are an indispensable component of the adaptive immune system, as they mount cell-mediated responses to fight pathogens as well as tumor cells. T cells are of three broad types: helper T cells (CD4 + ), cytotoxic T cells (CD8 + ), and suppressor T cells (e.g., CD4 + Tregs). Like all cells of the immune system, T cells are created in the bone marrow; uniquely, however, they mature in the thymus. The maturation process involves negative selection, in which T cells that are activated by “self” proteins undergo apoptosis. Once the T cells leave the thymus, they are considered to be mature but naïve, meaning they have not encountered corresponding antigens and will remain in the G0 phase of the cell cycle until they do so. The differentiation processes and the effector functions of T cells are fundamentally tied to cellular metabolism. During differentiation, T cells undergo metabolic reprogramming to meet the divergent energy needs of each cell type. This means that understanding the functions of cells at each stage of differentiation will help to reveal the metabolic pathways that are upregulated ( 11 ).
For example, naïve T cells require energy to migrate throughout lymphoid organs with little need for biosynthesis (as they are not proliferating), and resting T cells generate adenosine triphosphate (ATP) as an energy source by fatty acid oxidation (FAO). Activation of T cells leads to their becoming metabolically reprogrammed to a state of anabolic metabolism in which lipid biosynthesis is upregulated (as opposed to lipid oxidation) to allow the cells to divide and proliferate ( 12 ). Activated T cells become proliferative upon encountering infectious agents, and some proportion of those proliferating antigen-specific T cells develop into memory T cells, which persist even after contraction of the antigen-specific effector cells. Like naïve T cells, memory T cells use catabolic metabolism in the form of FAO in addition to oxidative phosphorylation to meet their metabolic demands and fulfill their functions. FAO is crucial for the transition of activated CD8 + T cells into memory T cells; indeed, one study found that altered expression of genes that regulate FAO correlated with a defect in memory T cell generation ( 13 ). The same study also showed that pharmacologic modulation of FAO could enhance CD8 + T-cell development after vaccination ( 13 ). These and other studies have established a link between lipid metabolism and cellular longevity. CD4 + Tregs, which express the protein Foxp3, are a subset of T cells that are crucial for self-tolerance. Previous studies have linked the pool of colonic Tregs in the gut with the concentration of short-chain fatty acids (FAs) produced by gut bacterial fermentation ( 14 ). Follow-up studies showed that these short-chain FAs expanded Tregs by suppressing JNK1 and p38 pathway signaling ( 15 ), which is crucial for intestinal homeostasis. Short-chain FAs derived from the microbiota also affect CD8 + memory T cells.
As noted earlier, FAO is important for the formation of memory T cells but microbiota-derived short-chain FAs in particular were required for optimal recall responses upon antigen re-encounter, as was observed in one study by memory-cell defects in germ-free mice ( 6 ). That same study showed that the memory T cells in mice consuming a high-fiber diet, which increases circulating levels of the short-chain FA butyrate, had significantly higher numbers of effector cells than did control mice ( 6 ). These findings confirm that metabolic substrate availability in the environment has a profound influence on the differentiation and function (or dysfunction) of T cells. Another example of this can be seen in the TME, which is often rich in cholesterol and FAs ( 4 ). Lipid uptake by Tregs in the TME depends largely on FA translocase (CD36), and TregCD36 –/– mice showed a profound loss of intratumoral Tregs ( 16 ). On the other hand, increased CD36 expression on CD8 + tumor-infiltrating lymphocytes (TILs) caused by lipid accumulation in the TME was found to correlate with progressive T-cell dysfunction, thought to be caused by uptake of oxidized low-density lipoprotein by CD36, which induced lipid peroxidation and downstream activation of P38 ( 4 ). Moreover, CD36-mediated uptake of FAs by CD8 + TILs in the TME led to ferroptosis and reduced the production of cytotoxic cytokines ( 17 ).
T-cell proliferation is also affected by long-chain polyunsaturated fatty acids (PUFAs) ( 18 ). The immune response is downregulated as n-3 PUFAs become incorporated within the lipid rafts of the cellular membrane ( 19 ), causing a decrease in pro-inflammatory molecules such as PGE2, TXB2, and LTB4 and a decrease in inflammatory cytokines ( 20 ). Arachidonic acid, an n-6 PUFA, has been linked with decreased production of interleukin (IL)-10 and interferon-gamma (IFNy) and upregulated Treg production, which ultimately dampens the T-cell immune response ( 18 ).
Other compounds that affect T-cell regulation are the so-called specialized proresolving lipid mediators (SPMs) derived from the FA docosahexaenoic acid ( 21 ). In one study, treating human T cells with the SPMs resolvin D1 (RvD1), RvD2, and maresin 1 (Mar1) led to downregulated production of the inflammatory cytokines tumor necrosis factor-alpha (TNFα) and IFNy by CD8 + T cells, as well as downregulated IL-17 in CD4 + cells. Treatment of CD4 + naïve T cells with SPMs also prevented those cells from differentiating into T helper 1 (TH1) and T helper 17 (TH17) cells ( 21 ). These same three SPMs also enhanced immunosuppression by increasing the number of Foxp3 + Tregs relative to control conditions ( 21 ).
Cholesterol and cholesterol derivatives also affect T-cell functions in the TME. In one study, the presence of cholesterol in the TME and in TILs was positively associated with upregulated expression of the immune checkpoint molecules PD-1, 2B4, TIM3, and LAG3 by T cells and T-cell exhaustion ( 5 ).
Macrophages
Macrophages are another important component of the innate immune system with roles in antigen presentation, microbial killing, and regulation of the inflammatory response ( 22 ). One of the most salient features of macrophages is their plasticity; they have been shown to activate into different polarized states based on specific microenvironmental conditions and signals ( 23 ). Broadly, M1 (classically activated) macrophages mediate pro-inflammatory and antitumor immune responses, whereas M2 (alternatively activated) macrophages are generally understood to mediate anti-inflammatory and pro-tumor immune responses ( 24 ). Macrophages that infiltrate tumors, i.e., tumor-associated macrophages (TAMs), can differentiate into either of these subtypes and interact with tumor cells through a variety of signaling molecules ( 25 ). Notably, TAMs are the most abundant immune cell in the TME, and they regulate a multitude of pro-tumor effects including angiogenesis, metastasis, and immune evasion ( 8 , 26 , 27 ). Although TAMs also exhibit a variety of metabolic alterations, their reprogrammed lipid metabolism has particularly important effects on their activity.
Generally, increased lipid accumulation in TAMs is positively associated with their differentiation and function, especially because FAO is responsible for the downstream transcriptional regulation of several genes pertinent to TAM activity ( 7 , 8 ). By extension, several studies have shown that lipid accumulation in TAMs corresponds with tumor progression in a variety of cancer types ( 28 ). In gastric cancer, lipid accumulation in TAMs resulted in the upregulated expression of phosphoinositide 3-kinase (PI3K)-γ, which induced M2-like polarization ( 29 ). Single-cell RNA sequencing of a subpopulation of TAMs in a mouse model of lung metastases from mammary tumors identified several clusters of macrophages, among them lipid-associated macrophages; these lipid-associated cells were present in greater numbers than in nontumor-bearing controls ( 30 ). In human hepatocellular carcinoma, lipid-associated macrophages were found to express TREM2 (a protein with immunosuppressive effects), which correlated with Treg recruitment and poor prognosis ( 31 ). Lipid-loaded TAMs have thus been broadly investigated for their pro-tumor effects, but their metabolic profiles may offer deeper insights into how they can specifically remodel the tumor immune microenvironment.
Notably, M1 macrophages primarily use glycolysis to produce energy, whereas M2 macrophages predominantly rely on FAO ( 32 ). Several studies have been conducted to clarify how FAO specifically regulates TAMs, but they have produced conflicting findings. For example, in one study, inhibiting FAO in TAMs was found to block the pro-tumor effects of M2-like TAMs in a hepatocellular carcinoma model ( 33 ), and in another, etomoxir, a chemical inhibitor of FAO, was found to prevent colon cancer–associated macrophages from polarizing into the M2 subtype ( 34 ). In contrast, another study used genetic ablation of carnitine palmitoyltransferase-2 (CPT2) to inhibit FAO in macrophages, but those macrophages retained their M2-like markers ( 35 ). Some studies have investigated specific pathways or intermediates in lipid metabolism for their relevance to TAM functioning. For example, studies of the role of peroxisome proliferator-activated receptor gamma (PPARγ) in TAM activity have had contradictory results. A deficiency of receptor-interacting protein kinase 3 in hepatocellular carcinoma models inhibited PPARγ cleavage, which increased FAO and induced M2 polarization ( 36 ). On the other hand, another group found that a binding event between truncated PPARγ and medium-chain acyl-CoA dehydrogenase in mitochondria led to the inhibition of FAO, the accumulation of lipid droplets, and the subsequent differentiation of TAMs. These investigators reasoned that inhibiting the caspase-1-catalyzed cleavage of PPARγ and promoting FAO may actually exhaust lipid droplets, reduce TAM differentiation, and attenuate tumor growth ( 8 , 9 ). In short, further research is needed to clarify the details of how FAO acts to regulate TAM activity.
Interactions between TAMs and cancer cells are known to be regulated by monoacylglycerol lipase (MGLL), a key enzyme involved in the metabolism of triacylglycerol. One study involving colon cancer models found that a deficiency of MGLL resulted in lipid overload in TAMs and specifically promoted CB2/TLR4-dependent macrophage activation, polarizing TAMs into an M2-like phenotype and suppressing the activity of CD8 + T cells in the TME ( 37 ). On the other hand, overexpression of MGLL by cancer cells can promote the generation of free FAs, which are an important nutrient source for tumors. Another research group used a nanoplatform to simultaneously block MGLL activity and suppress CB2 expression, which reduced free FA generation and repolarized TAMs into the M1 phenotype ( 38 ). More broadly, the metabolism of long-chain FAs, particularly unsaturated FAs, has been shown to promote the immunosuppressive phenotype of TAMs. These findings suggest that enriched lipid droplets may be optimal targets for reversing immunosuppression and enhancing antitumor effects on a metabolic level ( 34 ).
TAMs exhibiting altered lipid metabolism can also influence tumor progression through specific molecular factors. One group screened TAM subpopulations among colorectal cancer cells and found that TAMs with lower expression of abhydrolase domain-containing 5 (ABHD5), a coactivator for adipose triglyceride lipase, had higher levels of reactive oxygen species and matrix metalloproteinases, which facilitate invasiveness ( 39 ). TAMs with enhanced lipid uptake have also been shown to express higher levels of genes for pro-tumor molecules such as Arg1 , Vegf , and Hif1a ( 7 , 40 ). A broad range of lipid metabolites in the TME are also responsible for regulating TAM functioning. As an example, 27-hydroxycholesterol (27HC), a primary metabolite of cholesterol, was found to be highly expressed in TAMs and positively correlated with breast cancer metastasis ( 41 ). 27HC was also shown to mediate IL-4-induced M2 macrophage polarization and promoted the recruitment of immunosuppressive monocytes ( 42 ). Prostaglandin E2 (PGE2), a mediator of inflammation, has been extensively researched for its multifaceted effects on macrophage activity in cancer. In bladder, nasopharyngeal, and melanoma cancer models, PGE2 has been implicated in promoting the differentiation of myeloid-derived suppressor cells, inhibiting the phagocytic activity of TAMs, and enhancing angiogenesis ( 40 , 43 – 45 ). Metabolites of the enzyme 5-lipoxygenase, highly produced by hypoxic ovarian cancer cells, promoted TAM infiltration via upregulation of MMP-7 ( 46 ). In murine melanoma models, β-glucosylceramide induced an endoplasmic reticulum stress response, triggering a STAT3-mediated signal cascade that promoted the expression of immunosuppressive genes and supported a pro-tumor phenotype in TAMs ( 47 ).
As noted earlier, SPMs can also influence macrophage function. The SPM MaR1 acts as a ligand for retinoic acid–related orphan receptor alpha (RORα), which increases macrophage M2 polarity ( 48 ). RORα is a nuclear receptor that regulates inflammatory pathways and lipid metabolism in cells ( 49 ). Activation of RORα by MaR1 also decreases M1 polarization and upregulates anti-inflammatory cytokines ( 48 ). Corroborating findings with human monocyte lines further support a role of RORα in inflammation; specifically, knocking out RORα in these cell lines led to upregulation of TNF and IL-1β, and RNA sequencing showed that RORα knockout led to activation of cells similar to M1 macrophages ( 49 ). SPMs seem to circumscribe or localize inflammation to prevent the development of chronic inflammation ( 21 , 50 ). Collectively, the various lipid metabolites present in the TME dynamically influence TAM activity and may represent potential therapeutic targets for cancer on an immunological basis.
Natural killer cells
NK cells are innate immune cells that are broadly understood to control tumors and various microbial infections by mitigating the spread of the invading agents and the subsequent tissue damage ( 51 ). In the context of cancer, NK cells can kill target cells directly through the secretion of perforins and granzymes, which is a hallmark of their cytotoxic activity. They can also produce a host of cytokines and chemokines that facilitate an antitumor immune response ( 52 ). However, the phenotype of naïve NK cells, including the expression of several activating and inhibiting receptors, can be significantly altered by malignant cells. Tumor-associated natural killer (TANK) cells display these altered phenotypes, resulting in either functional anergy or reduced cytotoxicity ( 53 ). For instance, in tumor specimens from patients with non-small cell lung carcinoma, intratumoral NK cells exhibited reduced NK-cell receptor expression, impaired degranulation capacity, and decreased IFN-γ production ( 54 ). Similar observations in patients with colorectal cancer further implicate TANK cells in cancer progression ( 55 ). TANK cells display a phenotype that mechanistically explains many of their pro-tumor activities; however, the complex interaction between the TME and TANK cells also raises questions about the interplay between TANK cell metabolism (specifically, lipid metabolism) and cancer. Although lipid metabolism in the context of the tumor immune microenvironment has been extensively studied, its specific role in regulating TANK activity remains poorly understood.
One emerging focal point for investigations of lipid metabolism in NK cells is mammalian target of rapamycin (mTOR), a serine/threonine kinase with a central role in signaling lipid metabolism in NK cells, particularly those stimulated by IL-15 ( 56 ). One study found that continuous treatment of NK cells with IL-15 exhausted their spare respiratory capacity via FAO reduction and resulted in reduced tumor control; however, treating these NK cells with an mTOR inhibitor rescued their functioning ( 57 ). Moreover, obesity was found to result in PPAR-driven lipid accumulation in NK cells, and the administration of FAs along with PPARα/δ agonists (i.e., mimicking obesity) blocked mTOR-regulated glycolysis. Consequently, NK cells trafficked less cytotoxic machinery to the NK cell–tumor synapse and exhibited decreased antitumor activity ( 58 ).
Although few studies have directly examined the effects of altered lipid metabolism on TANK activity, one key investigation in the field of perioperative immunology characterized the effect of NK-cell lipid accumulation on postoperative metastasis. That study of both preclinical murine models and human colorectal cancer patient samples revealed that lipid accumulation in NK cells promoted postoperative metastasis relative to controls ( 10 ). In the murine models, the scavenger receptors MSR1, CD36, and CD68 (all crucial for intracellular lipid transport and uptake) were all significantly upregulated ( 10 , 56 ). Also, the lipid-laden TANK cells displayed profoundly reduced tumor-killing ability both ex vivo and in vivo . The human specimen studies further demonstrated accumulation of FAs in NK cells from 1 to 3 days after surgery; these lipid-laden TANK cells also expressed higher CD36 levels and reduced granzyme B and perforin expression ( 10 ). Collectively, the findings from this study suggest that lipid accumulation and dysregulated lipid metabolism in TANK cells participate in facilitating metastasis. Nevertheless, further research into specific lipid metabolic alterations in TANK cells, and research into the interactions between certain lipid metabolites (e.g., PGE2, 27HC) in the TME and TANK cells, is needed to better understand the complex relationship between TANK cells and cancer.
Dendritic cells
Dendritic cells (DCs) are another type of innate immune cells with key functions in antigen presentation that subsequently connect the innate and adaptive immune systems. The three general types of DCs are plasmacytoid DCs, conventional DCs, and monocyte-derived inflammatory DCs ( 59 ). Plasmacytoid DCs specialize in antiviral immunity and produce high levels of type I IFNs. Conventional DCs are efficient in antigen presentation and support helper T cells. Monocyte-derived DCs are involved in antigen presentation in cases of infection, inflammation, and cancer and have key roles in cancer immunotherapy; monocyte-derived DCs are also affected by lipid accumulation ( 59 ).
Findings from one study of a mouse ovarian cancer model showed that abnormal lipid accumulation led to impairment in antigen presentation by DCs ( 60 ). Moreover, this model was also notable for accumulation of lipid peroxidation products, which in turn led to endoplasmic reticulum stress and impaired protein folding, with subsequent activation of X-box-binding protein 1 (XBP1). Upon isolation of DCs from mice with tumors and healthy mice, cross-presentation of antigens was found to be downregulated in the mice with tumors, and accumulation of lipid droplets led to activated XBP1 ( 60 ). Other evidence has implicated PGE2 in both downregulation and upregulation of the immunoregulatory activity of DCs, depending on their stage of maturation ( 61 ). Immature DCs have pro-inflammatory effects in the presence of PGE2, resulting in upregulation of IL-6, TNFα, and IL-1β. In mature DCs, PGE2 leads to IL-10 production, which has anti-inflammatory effects. PGE2 can also inhibit the release of the inflammatory chemokines CCL3 and CCL4 in DCs, which results in downregulation of activated DCs ( 61 ).
Lipid accumulation in DCs has also been noted in mouse models of EL-4 lymphoma stained with the lipid marker BODIPY 493/503 ( 62 ). T-cell proliferation in mice with normal-lipid-bearing DCs was compared with that of mice with high-lipid-bearing DCs; the high-lipid DCs showed lower antibody-binding affinity and reduced antigen presentation compared with the normal-lipid DCs. Conversely, manipulation of lipid regulation in tumor cells by using the acetyl-CoA carboxylase inhibitor 5-(tetradecyloxy)-2-furoic acid restored the ability of DCs to stimulate T cells, which led to increased antitumor activity ( 62 ). In another model of radiation-induced thymic lymphoma, triacylglycerol serum levels were found to be higher than in control mice ( 63 ), which led to decreases in the secretion of IL-12p40, IL-1, and IFN-γ by the DCs, thereby downregulating their antigen-presenting function. In another study of specimens from patients with lung cancer, BODIPY 650/665 fluorescence staining revealed elevated triglyceride accumulation, particularly in patients with stage III or IV lung cancer ( 64 ). Evaluation of the mixed lymphocyte reaction in these samples (in which T cells are incubated with antigen-presenting cells such as DCs) showed that the reaction was weakest in samples from patients with stage IV lung cancer, and that this low level of reactivity correlated with higher triglyceride levels in the DCs ( 64 ). Lipids therefore represent candidate targets for immunotherapy. The pro-tumor effects of lipids on immune cells are summarized in Figure 2 .
Lipids and lipid oxidation in cancer cell proliferation and survival
The uncontrolled proliferation of cancer cells necessitates accumulation of a significant quantity of lipids, not only for energy but also to build the membranes and organelles of these cells. These lipids can be acquired from exogenous sources or synthesized endogenously through lipogenic pathways ( 65 , 66 ). Understanding how lipids affect tumor cell growth and cell death provides further insight into how this broad class of biomolecules can be used as a target in cancer treatments.
Cancer that develops in areas of the body with large adipocyte stores tends to have higher amounts of circulating FAs; these higher circulating FA levels, together with the nearby adipose tissue, influence the metabolism of the cancer ( 65 ). As noted previously in this review, depletion of glucose stores during rapid proliferation and growth of tumors leads to areas of nutrient deprivation within those tumors, meaning that TILs rely on oxidative phosphorylation to maintain their energy levels and effector functions ( 67 ). When oxygen supplies are limited, the expression of hypoxia-inducible factor 1α enhances glycolysis ( 65 ). A lack of both oxygen and glucose may further shift the metabolic profile of TILs to increased FA uptake and catabolism to maintain effector function, with the balance between FAO and ketone body metabolism depending on the extent of oxygen deprivation ( 67 ).
As discussed throughout this review, although FA oxidation is a highly efficient form of ATP generation for cancer cells, lipids can also influence proliferation and migration in ways other than providing an energy source ( 65 ). As an example, cancer-cell proliferation can also be enhanced by cancer-associated fibroblasts, which transfer lipids to cancer cells through ectosomes. Other examples focus on the interactions between breast tumor cells and lipids, given the large amounts of adipose tissue surrounding breast tumors, with one group studying the “parasitic” relationship of breast cancer cells with adipocytes and lipid stores. That study showed that co-culturing breast cancer cells with adipocytes led to activation of lipolysis within the adipocytes, resulting in the release of FAs into the extracellular space that are then consumed by the cancer cells, fueling both the proliferation and migration of the cancer cells ( 68 ). Breast cancer cells have also been shown to respond to the lipolysis of adipose cells by increasing their expression of carnitine palmitoyltransferase 1A, which is the rate-limiting enzyme for long-chain FA transport into the mitochondria for FAO ( 69 , 70 ). Activation of adipocytes by the nearby cancer cells also leads to the secretion of higher levels of proinflammatory cytokines such as IL-6 ( 71 ).
Cancer cells also take up lipids and their building blocks (FAs) to fuel their dissemination and resistance to therapy ( 72 ). One way in which this occurs is by the FA scavenger receptor CD36, which can bind and internalize long-chain FAs, lipoproteins, thrombospondin-1, and other pathogen-associated molecules ( 73 ).
Lipid metabolism can also induce cell death by changing the permeability of the cell membrane and by activating various enzymes involved in cell death, such as caspases. Changes in membrane permeability also influence ferroptosis, a type of cell death that is induced by the iron-dependent peroxidation of polyunsaturated FAs in membranes ( 72 ). The process of ferroptosis breaks down membrane integrity, which leads to the death of the cell. Ferroptosis is readily triggered in iron-rich environments such as the blood; cancer cells must therefore regulate their membrane lipid composition to survive during dissemination. Although increased membrane lipid saturation can lead to endoplasmic reticulum stress and apoptosis, membrane lipids that contain large amounts of polyunsaturated FAs are more likely to sensitize the cells to lipid peroxidation and ferroptosis ( 73 ).
Lipid metabolism in cancer progression and angiogenesis
Overview of lipid metabolism
Lipid metabolism comprises the biosynthesis and degradation of lipids in the cell. Lipids can be obtained from the diet or synthesized de novo in the liver or adipose tissue ( 74 ). Understanding the mechanisms behind lipogenesis and FAO can provide insight into possible therapeutic targets that regulate lipid metabolism ( Figure 3 ).
Fatty acid synthesis occurs primarily in the liver and adipose tissues using excess glucose and amino acids ( 74 ). Acetyl-CoA, an intermediate in the tricarboxylic acid (TCA) cycle, is used in fatty acid synthesis. Acetyl-CoA combines with oxaloacetate (OAA) to form citrate, which leaves the mitochondria through the citrate–malate antiporter for the cytoplasm, where fatty acid synthesis occurs ( 74 ). Once in the cytoplasm, ATP citrate lyase (ACL) cleaves the citrate back into acetyl-CoA and OAA. Acetyl-CoA is carboxylated to malonyl-CoA by acetyl-CoA carboxylase. Malonyl-CoA is then used to form FAs via fatty acid synthase (FAS).
Triacylglycerols (TAGs) are another important class of lipids used to store excess fat ( 74 ). TAG synthesis takes place in the smooth endoplasmic reticulum (ER) of adipose tissue and hepatocytes. Biosynthesis of TAGs begins with glycerol-3-phosphate (G3P), which is created in the liver from glycerol by glycerol kinase or through reduction of dihydroxyacetone phosphate (DHAP), a glycolytic intermediate, by glycerol-3-phosphate dehydrogenase (GPDH) ( 74 ). Lysophosphatidic acid (LPA) is then produced through an acylation reaction catalyzed by sn-1-glycerol-3-phosphate acyltransferase (GPAT). LPA is then used to produce phosphatidic acid (PA) by acyl-CoA:1-acylglycerol-3-phosphate acyltransferase (AGPAT). PA is hydrolyzed to form diacylglycerol (DAG) by PA phosphatase (PAP), and DAG is subsequently esterified to produce TAG via DAG acyltransferase (DGAT) ( 75 ).
In order to utilize the energy stored in lipids, cells undergo FAO ( 74 ). FAO primarily occurs in the mitochondria of hepatocytes. FAO begins with the conversion of FAs to fatty acyl-CoA by acyl-CoA synthetase (ACS). Fatty acyl-CoA is then translocated into the mitochondria via the carnitine shuttle, which consists of carnitine palmitoyltransferase I (CPT I) and CPT II. CPT I catalyzes the conversion of fatty acyl-CoA to acylcarnitine, which is then used to reform fatty acyl-CoA via CPT II inside the mitochondria. Fatty acyl-CoA undergoes an oxidation reaction catalyzed by acyl-CoA dehydrogenase (ACAD) to produce trans-enoyl-CoA. A hydration reaction follows in which trans-enoyl-CoA is converted to β-hydroxyacyl-CoA via enoyl-CoA hydratase (ECH). β-Hydroxyacyl-CoA is then oxidized by β-hydroxyacyl-CoA dehydrogenase (HADH) to produce β-ketoacyl-CoA. β-Ketoacyl-CoA then undergoes a thiolysis reaction via ketoacyl-CoA thiolase (KAT) to form acetyl-CoA, which can be used in the TCA cycle ( 74 ).
The metabolism of lipids is highly regulated in the body. PPARα, PPARγ, SREBPs, and carbohydrate response element binding protein (ChREBP) are key transcription factors that modulate FA synthesis ( 76 ). PPARα is activated by FAs and reduces triacylglycerol levels in the blood via upregulation of lipoprotein lipase activity and FAO ( 76 ). PPARγ acts in adipose tissues and contributes to increased triacylglycerol synthesis and lipid accumulation ( 77 ). Lipid metabolism can also be regulated in response to glucose via ChREBP ( 78 ). SREBPs regulate lipid metabolism by controlling the expression of enzymes required for lipogenesis ( 79 ). Given their influence on lipid metabolism, these transcription factors can serve as potential targets in lipid-mediated immunotherapies.
Role of lipid metabolism in cancer
As discussed in the previous section, lipids and their metabolism have significant roles in immune function and responsiveness. Lipids are also essential components of cell membranes and are important in cellular processes such as signaling, energy storage, and immune system function, meaning that lipids and lipid metabolism can influence the effectiveness of immunotherapies. As one example, having non-small cell lung cancer with a high mutation burden in the lipid metabolism pathway has been linked with better response to immune checkpoint therapy and prolonged progression-free survival ( 80 ). Another group found that T-cell senescence caused by tumor cells or Tregs in the TME could be reversed by reprogramming lipid metabolism. Specifically, unbalanced lipid metabolism related to senescence was found to elevate group IV A phospholipase A2, pharmacologic inhibition of which enhanced antitumor immunity in melanoma and breast cancer mouse models treated with adoptive T-cell transfer therapy ( 81 ).
Cancer cells can also “hijack” metabolic pathways to meet the increased demand for energy. For example, the conversion of FAs to phospholipids provides signals that activate proteins and bind to G protein-coupled receptors; this process enhances the proliferation, survival, and migration of malignant cells to establish distant metastases ( 82 ). As noted previously, malignant cells also consume FAs to sustain energy and promote their survival and use lipids to support their cellular membranes as well ( 82 ).
Other groups studying FA metabolism in cancer found that overexpression of acyl-CoA synthetase and stearoyl-CoA desaturase-1 prompted the epithelial-to-mesenchymal transition in colorectal cancer cells ( 83 ). Moreover, adipose tissues can release free FAs and secrete growth factors and cytokines after lipolysis. Indeed, in one study, co-culture of ovarian cancer cells with adipocytes led to activated lipolysis and the release of free FAs, which in turn contributed to tumor-cell proliferation and migration ( 84 ). Thus, malignant cells can promote their survival and metastasis via lipid metabolism. Upregulation of lipogenic enzymes, such as those that generate lysophosphatidic acid, in various types of cancer cells has also been found to promote the growth of those cells ( 85 ).
Lipids and cancer metastasis
The appearance of metastatic disease carries a poor prognosis for patients with cancer ( 86 ). Lipid accumulation and increased lipid production in cancer cells have been shown to increase metastasis. Different therapies targeting FA-synthesizing enzymes have been explored in attempts to mitigate tumor-cell migration. This topic remains largely unexplored, however, and additional research is needed to better understand the mechanisms by which lipids influence metastasis and how they affect therapies.
Two enzymes with key roles in lipid metabolism, FA synthase and monoacylglycerol lipase, participate in lipid synthesis ( 87 , 88 ). In one study involving a model of prostate cancer in BALB/c mice, the metastatic potential of the cancer cells was found to increase when either enzyme was expressed in the presence of FA-binding protein 5. Specifically, expression of FA synthase and monoacylglycerol lipase led to increased prostate cancer cell migration and invasion. Conversely, treating these cells with C75, an FA synthase inhibitor, led to decreased migration and invasion compared with the control ( 89 ). Another enzyme involved in synthesizing triglycerides has also been linked with increased metastasis in gastric cancer. Specifically, mice fed a high-fat diet showed overexpression of diacylglycerol acyltransferase 2 (DGAT2); when those mice were implanted with gastric cancer cells, the overexpression of DGAT2 led to increased peritoneal metastasis. Conversely, treating the mice with the DGAT2 inhibitor PF-06424439 suppressed mesenteric metastasis ( 86 ). Another series of experiments with an ovarian cancer model showed that overexpression of FA-binding protein 4 (FABP4) enhanced cancer cell proliferation via transfer of FAs from adipocytes ( 90 , 91 ); conversely, downregulation of FABP4 led to the formation of fewer metastatic nodules ( 90 ). Another group of lipids, the eicosanoids, have also been linked with inflammation and cancer progression. In a mouse model of colorectal cancer, exposure to the eicosanoid PGE2 increased the number of cancer stem cells and resulted in increased liver metastasis, which was found in mechanistic studies to be due to activation of nuclear factor κB in the EP4-MAPK and EP4-PI3K-Akt signaling pathways ( 92 ).
Leukotrienes, another type of eicosanoid, have been implicated in priming the TME toward inflammatory (premetastatic) conditions ( 93 ); another group showed that leukotriene treatment promoted the epithelial-to-mesenchymal transition, thereby enhancing the capacity of the cells to migrate and metastasize ( 94 ).
The effects of lipids on cancer progression can also be assessed by studying transcription factors responsible for lipid production. Sterol regulatory element-binding protein 1 (SREBP-1) is a nuclear transcription factor responsible for the regulation of cholesterol, FA, and phospholipid synthesis. SREBP-1 has been shown to promote the transcription of the genes for three enzymes: FA synthase, acetyl-CoA carboxylase, and 3-hydroxy-3-methylglutaryl-CoA reductase. SREBP-1 is an important regulator in hepatocellular carcinoma; upregulated SREBP-1 in tissue samples from patients with hepatocellular carcinoma was associated with poor outcomes, and corresponding in vitro experiments showed that downregulation of SREBP-1 increased the numbers of apoptotic cells and inhibited cell proliferation ( 95 ). These investigators concluded that SREBP-1 could be a potential therapeutic target in hepatocellular carcinoma.
Lipids in cancer immunotherapy
Lipids as adjuvants
Lipids have essential functions in cancer immunotherapy and can contribute directly or indirectly to therapy outcomes. Lipids are often used in cancer immunotherapy as adjuvants, that is, substances that augment an immune response by enhancing tumor-associated antigen presentation or activating antigen-presenting cells. One example of this is the use of adjuvants with tumor-associated-antigen subunit–based vaccines that elicit only a weak immune response on their own. Among the many available cancer vaccine adjuvants are TLR4 and TLR7/8 agonists, which induce robust activation of antigen-presenting cells, CD4 + and CD8 + T cells, and NK cells, and shift the TME toward an inflammatory state through the expression of cytokines and chemokines ( 96 ). Moreover, lipid adjuvants can be combined with prophylactic or therapeutic cancer vaccines to enhance the effectiveness of those vaccines. One example is monophosphoryl lipid A, a modified form of a lipid present in an endotoxin from Gram-negative bacteria that activates TLR4 and stimulates an inflammatory response ( 97 ). The adjuvant AS04, which contains monophosphoryl lipid A and alum, has been used successfully in Cervarix, a vaccine to prevent human papillomavirus (HPV) -16 and -18 –associated cervical cancer. The inclusion of the AS04 adjuvant in this vaccine has been shown to evoke a more robust immune response in vaccinated people ( 98 ). Other synthetic lipid adjuvants such as 3M-052, GLA-SE, CRX-527, Ono-4007, OM-174, and DT-5461 have also been developed and applied for this purpose ( 99 – 103 ). Despite their somewhat limited clinical efficacy on their own, incorporating lipids as adjuvants in cancer therapy has promising potential, especially for tumors of low immunogenicity. Further study is required to improve their effectiveness and define their mechanistic effects on immunotherapies.
Lipids as vehicles
Lipids are also used in nanoparticle form, including liposomes, solid lipid nanoparticles, and nanostructured lipid carriers, as a drug delivery vehicle to facilitate the transport of therapeutic agents to cancer cells and enhance their effectiveness. Coating drugs with lipid nanoparticles can increase drug bioavailability, biocompatibility, and biodegradability; reduce side effects; enable controlled release and extended periods in circulation; offer protection from chemical or enzymatic degradation; avoid the hepatic first-pass effect; and bypass the blood-brain barrier. Liposomes or lipid nanoparticles can carry both hydrophilic and lipophilic drugs, including immunomodulators, which can be loaded by mixing, conjugating, or encapsulating them into lipid particles to increase their efficacy. These nanoparticles can be administered via various routes, including topical, oral, parenteral, ocular, pulmonary, and intracranial ( 104 ). Further, coupling antibodies to the surface of the liposomes, thereby creating immunoliposomes, can enable targeted therapies and increase their specificity ( 105 ).
Lipid nanoparticles are widely used to enhance antitumor responses by increasing the immunogenic effects of cancer immunotherapy. One example is resiquimod, a hydrophobic agonist of Toll-like receptor (TLR) -7/8 used as an adjuvant in topical preparations for skin carcinomas; it has also been made more water-soluble by combining it with lipid particles and administering the compound systemically, to increase the efficacy of immunotherapy ( 106 ). Another group also showed that a liposomal preparation of resiquimod improved the adjuvant’s pharmacokinetics and prolonged its retention time in the blood, which also improved the effectiveness of immunotherapy ( 107 ). CpG oligodeoxynucleotides are another type of immune stimulator and vaccine adjuvant that act as a TLR9 agonist; their effectiveness can be increased by combining them with a liposomal nanoparticle carrier to enhance their delivery to macrophages and other immune cells, thereby initiating an antitumor response ( 108 ). Mifamurtide is a liposomal formulation of an immunomodulator used to treat osteosarcoma; it contains bacterial cell wall peptides that trigger the innate immune system to exert an antitumor effect via NOD2 receptors ( 109 ).
Another novel way to use lipid nanoparticles in cancer immunotherapy is to load them with tumor-specific or tumor-associated antigens to develop cancer vaccines. However, even though encasing these antigenic peptides in liposomes can reduce their degradation, increase their presentation to antigen-presenting cells, and stimulate CD4 + T-cell responses to tumors, generating peptide vaccines is expensive and time-consuming. Therefore, mRNA vaccines have emerged as a better alternative because they are cheaper, well tolerated, faster to produce on a large scale, and easier to combine with liposomes ( 110 ). After an mRNA vaccine is injected, cells take up the mRNA and begin translation of the desired proteins in the cytoplasm. These proteins in turn are split into peptides that then enter the MHC presentation cascade. Lipid nanoparticles can drastically enhance the effectiveness of mRNA vaccines by facilitating uptake of the mRNA and offering protection from degradation ( 111 ). The advent of mRNA vaccines encapsulated in lipid particles has led to many clinical trials for various types of cancer, including melanoma, glioblastoma, ovarian, breast, gastrointestinal, genitourinary, hepatocellular, and head and neck cancer, as well as lymphoma ( 112 ). Promising results from some of these trials provide the impetus for further investigation.
The role of lipids in immune responsiveness
Lipids are essential components of the cell membrane and play a role in critical cellular processes, including signaling, energy storage, and immune system function. In the context of cancer immunotherapy, lipids play critical roles in immune responsiveness: immune cells in the tumor microenvironment reprogram their lipid metabolism according to their unique needs and survival adaptations by increasing lipid uptake or de novo lipid synthesis. Lipid uptake is facilitated by transport proteins, including CD36, fatty acid transport proteins (FATPs), fatty acid-binding proteins (FABPs), and low-density lipoprotein receptors (LDLRs) ( 113 ). For instance, intratumoral Tregs have been shown to alter their lipid metabolism via the CD36–PPAR-β axis, enhancing fatty acid transport and mitochondrial fitness to adapt metabolically to the tumor microenvironment and increase their survival. Targeting CD36 induces selective intratumoral Treg apoptosis through metabolic stress in the microenvironment and contributes to immune checkpoint therapy ( 16 ). In another study, CD36 was also found to facilitate ferroptosis, a form of regulated cell death mediated by iron-dependent lipid peroxidation, in CD8 + T cells and to reduce cytotoxic cytokine production; blocking CD36 on CD8 + T cells increased the antitumor response and the effect of immune checkpoint therapy ( 17 ). It has also been shown that fatty acid transport protein 2 (FATP2) is responsible for reprogramming pathological neutrophils, called myeloid-derived suppressor cells (MDSCs), via upregulation of arachidonic acid metabolism and synthesis of PGE2. Pharmacologically targeting FATP2 diminished the suppressive effect of MDSCs and tumor growth by reducing reactive oxygen species (ROS) and PD-L1 expression on tumor-infiltrating CD8 + T cells ( 114 , 115 ). Targeting de novo lipid synthesis is also promising in cancer immunotherapy to improve immune responsiveness.
Tregs in the tumor microenvironment are responsible for immune evasion, and sterol regulatory element-binding proteins (SREBPs) have been shown to drive de novo lipid synthesis and to be required for the functional integrity of Tregs ( 116 ). Tregs also promote M2 macrophage polarization and increase M2 macrophages' metabolic fitness, mitochondrial integrity, and survival via de novo lipid synthesis. Targeting M2 macrophage survival by blocking de novo fatty acid synthesis with SREBP1 inhibitors in Tregs improves antitumor immunity and the efficacy of immune checkpoint therapy ( 117 ). Furthermore, elevated fatty acid synthase (FASN), a crucial enzyme in de novo lipid synthesis, confers more aggressive phenotypes on ovarian cancer. High FASN expression has been reported to diminish tumor-infiltrating dendritic cells' antigen-presenting capacity and to blunt T cell-dependent antitumor immunity in mouse models; adding FASN inhibitors partially restores the immune response ( 118 ). Another study reported supporting findings: lipid accumulation in dendritic cells causes dendritic cell dysfunction that leads to immune evasion, and pharmacological normalization of lipid levels with an acetyl-CoA carboxylase inhibitor augments cancer vaccine efficacy by restoring dendritic cell function ( 63 ).
PD-1 expression in T cells, an exhaustion marker, has been shown to alter T-cell metabolism by increasing lipid metabolism and fatty acid oxidation (FAO) ( 119 ). FAO is also enhanced in tumor-infiltrating MDSCs, and FAO inhibition reduces the inhibitory effect of MDSCs and decreases their production of inhibitory cytokines ( 120 ). In melanoma models, paracrine signaling has been shown to enhance FAO via the β-catenin/PPAR-γ pathway in dendritic cells, inducing tolerization of the local dendritic cells. Blocking FAO with etomoxir reverses this immune-tolerant environment in the melanoma model and increases the effect of immune checkpoint therapy ( 121 ). On the other hand, exhausted CD8 + T cells have been reported to enhance fatty acid catabolism to maintain their function. Furthermore, augmenting fatty acid metabolism with the peroxisome proliferator-activated receptor α (PPAR-α) agonist fenofibrate improves CD8 + T-cell function in in vitro melanoma models and synergizes with the therapeutic effect of immune checkpoint therapy ( 67 ).
In summary, the evidence of correlations between lipids and cancer progression and metastasis is tantalizing, but much more research is needed to elucidate the mechanisms underlying these observations and apply them to anticancer therapy. Lipids are known to have profound effects on the immune system and thus are candidate targets in immunotherapy to address cancer progression. However, lipids are also an important component of normal cellular functions, and targeting this biomolecule will require a considerably deeper understanding of the mechanisms by which immune cells are affected. The contradictory findings obtained to date on lipids and cancer progression could be attributable to the diversity of lipids, variations in cellular context, and whether the experiments were conducted in vitro or in vivo . Further understanding of this important biomolecule can, it is hoped, prompt the development of novel lipid-targeted treatment approaches for cancer.
Author contributions
LD and MC conceptualized and developed this review. LD, HC, TR, SG, SN, TV, and HB collected, analyzed, and interpreted the relevant literature. JW and MC critically reviewed the manuscript. All authors contributed to the article and approved the submitted version.

Acknowledgments
The authors thank Christine F. Wogan, MS, ELS, for providing editing support during the preparation of this manuscript. Figures created with BioRender.com.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

License: CC BY. Citation: Front Oncol. 2023 May 2; 13:1187279
Accession: PMC10318005 | PMID: 37258583

Introduction
The prevalence of diabetes mellitus (DM) is increasing rapidly worldwide. The number of people with DM worldwide is over 425 million and is expected to reach 700 million by 2045 1 . DM is strongly associated with cardiovascular disease (CVD). Among CV complications, heart failure (HF) increases significantly in DM patients compared with non-DM patients 2 – 4 . Worse still, DM is associated with an increased risk of overall mortality in HF 5 . Although DM is a crucial risk factor for HF, the effectiveness of glycemic control in preventing or managing HF has not yet been proven. Some blood-glucose-lowering drugs, such as insulin, thiazolidinedione (TZD), and dipeptidyl peptidase-4 (DPP-4) inhibitors, increase the risk of hospitalization for HF 6 . Therefore, the US Food and Drug Administration (FDA) requires cardiovascular outcome trials (CVOTs) for all candidate DM drugs 7 . Surprisingly, significant cardioprotective effects were demonstrated in two major trials of sodium-glucose cotransporter (SGLT) 2 inhibitors, empagliflozin (EMPA) and dapagliflozin 8 , 9 . In addition, sotagliflozin (SOTA), the first reported dual SGLT1/2 inhibitor, significantly reduced the risk of HF 10 .
SGLTs exist in two major isoforms: SGLT1 and SGLT2 11 . SGLT2 is strongly expressed in the renal proximal tubule and plays a crucial role in glucose reabsorption 11 , 12 . Inhibition of SGLT2 effectively reduces blood glucose by decreasing glucose reabsorption in the renal proximal tubule and increasing urinary glucose excretion. SGLT1 is mainly expressed in the small intestine, contributing to glucose uptake 13 . Dual SGLT1/2 inhibitors provide a practical hypoglycemic effect by reducing glucose absorption in the small intestine and glucose reabsorption from the proximal renal tubule through SGLT1 and SGLT2 inhibition, respectively 14 . Despite the significant cardioprotective effects of these inhibitors, it has not yet been determined whether SGLT2 inhibitors or dual SGLT1/2 inhibitors provide more effective cardiovascular protection.
Several recent studies have focused on the sodium-hydrogen exchanger 1 (NHE1) in the myocardium as an off-target ligand of SGLT2 inhibitors 15 – 17 . NHE1 is a transporter that exchanges H + and Na + and maintains intracellular pH homeostasis. Upregulation of NHE1 is found in the ventricular tissue of patients with HF 18 , and experimental HF models showed that selective inhibition of NHE1 improves cardiac function by suppressing fibrosis and hypertrophy 19 . However, it has not yet been confirmed whether SOTA can target NHE1 as other SGLT2 inhibitors do.
We have developed a zebrafish DM combined with HF with reduced ejection fraction (DM-HFrEF) model 20 . Zebrafish have the benefits of high fertility, cost-effectiveness, and physiological similarity to humans 21 . In particular, pancreatic β-cells and the heart are structurally and genetically similar to those of humans. Zebrafish larvae have transparent bodies; therefore, blood flow and cardiac contraction can be directly observed under a microscope. Moreover, zebrafish larvae are 3 to 5 mm in length and are suited for arraying into 96- and even 384-well plates, which are suitable for mass screening of survival after drug treatment. Here, we evaluated the effect of EMPA and SOTA in the DM-HFrEF zebrafish model and the role of NHE1 inhibition in the action of the two drugs.

Materials and methods
Zebrafish maintenance
Adult zebrafish ( Danio rerio ) were maintained at 28 °C on a 14:10 h light-dark cycle in an automatic circulating tank system and fed Artemia 2 times a day. Embryos were raised at 28 °C in egg water (60 μg/ml ocean salts, Sigma‒Aldrich, St. Louis, MO, USA), and experiments were performed on the hatched zebrafish embryos from 3 days post-fertilization (dpf) to 9.5 dpf. If the survival rate of zebrafish larvae at 24 hours post-fertilization (hpf) was less than 80%, those larvae were not used in the experiments. We used wild-type (WT) zebrafish and a transgenic ( Tg, myl7:EGFP ) zebrafish strain expressing enhanced green fluorescent protein ( EGFP ) with cardiac myosin light chain 7 ( myl7 ) in the myocardium 22 . All animal experiments and husbandry procedures were approved by the Institutional Animal Care and Use Committee (IACUC) of Seoul National University (accession no. SNU-200310-1), and all experiments were conducted in accordance with the National Institutes of Health (NIH) Guide for the Care and Use of Laboratory Animals 23 . Zebrafish larvae were anesthetized by immersion in 0.016% tricaine solution (MS-222, Sigma‒Aldrich, St. Louis, MO, USA) for 5 min. Zebrafish larvae were euthanized by the hypothermic shock method, in which they were exposed to ice-cold water for at least 20 min, in accordance with American Veterinary Medical Association (AVMA) guidelines 24 .
Production of the DM-HFrEF zebrafish model
The DM-HFrEF zebrafish model was generated as described in our previous study 20 . First, a DM-like condition was induced in zebrafish larvae using a combination of D-glucose (GLU, Sigma‒Aldrich, St. Louis, MO, USA) and streptozotocin (STZ, Sigma‒Aldrich, St. Louis, MO, USA). At 3 dpf, zebrafish larvae were immersed in egg water containing 40 mM GLU. D-mannitol (MAN, Sigma‒Aldrich, St. Louis, MO, USA) was used as an osmotic control for GLU. At 4 dpf, 50 μg/ml STZ was added in the dark, and the larvae were incubated for 2 h. At 5 dpf, HF was induced by treatment with terfenadine (TER, Sigma‒Aldrich, St. Louis, MO, USA), a potassium channel blocker that induces HF in zebrafish 20 , 25 , 26 . EMPA or SOTA (Cayman Chemical, Ann Arbor, MI, USA) was administered according to the respective concentrations at 5 dpf (Supplementary Table 1 ), and analyses were performed after more than 24 h of incubation (Supplementary Fig. 1 ).
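The staged induction protocol above can be summarized as a simple timetable. The sketch below is illustrative only: the step descriptions paraphrase the text, and the variable names and helper function are our own, not part of any published analysis script.

```python
# Illustrative timetable of the DM-HFrEF induction protocol (dpf = days
# post-fertilization). Concentrations and time points are from the text.
protocol = {
    3.0: "immerse larvae in egg water + 40 mM D-glucose (GLU)",
    4.0: "add 50 ug/ml streptozotocin (STZ); incubate 2 h in the dark",
    5.0: "add terfenadine (TER) to induce HF; administer EMPA or SOTA",
    6.0: "begin analyses (>= 24 h after drug treatment)",
}

def steps_between(start_dpf, end_dpf):
    """Return protocol steps scheduled within [start_dpf, end_dpf], in order."""
    return [(t, s) for t, s in sorted(protocol.items()) if start_dpf <= t <= end_dpf]

for t, step in steps_between(3, 5):
    print(f"{t} dpf: {step}")
```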
Quantitative real-time polymerase chain reaction (qRT‒PCR)
For qRT‒PCR analysis, euthanized zebrafish larvae were washed twice with phosphate-buffered saline (PBS), and total cellular RNA was extracted using QIAzol Lysis Reagent (Qiagen, Hilden, Germany). The extracted total cellular RNA was reverse transcribed using amfiRivert cDNA Synthesis Premix (GenDEPOT, Katy, TX, USA) according to the manufacturer’s instructions. Using the SYBR Green PCR kit (GenDEPOT, Katy, TX, USA) and the StepOnePlus Real-Time PCR System (Applied Biosystems, Waltham, MA, USA), qRT‒PCR was performed on the prepared cDNA. The following primers were used: nppb (forward: 5′-CAT GGG TGT TTT AAA GTT TCT CC-3′, reverse: 5′-CTT CAA TAT TTG CCG CCT TTA C-3′) and 18S ribosomal RNA ( rRNA ) (forward: 5′-TCG CTA GTT GGC ATC GTT TAT G-3′, reverse: 5′-CGG AGG TTC GAA GAC GAT CA-3′). All samples were normalized using the 2^−ΔΔCt method, with 18S rRNA as the housekeeping gene.
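The comparative-Ct (2^−ΔΔCt) normalization used above amounts to two subtractions and an exponentiation. A minimal sketch, with entirely hypothetical Ct values chosen only to illustrate the arithmetic (not measured data):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^-ddCt method: the target gene (e.g. nppb) is
    normalized to the housekeeping gene (e.g. 18S rRNA) and then to the
    control group."""
    dct_sample = ct_target - ct_ref             # dCt of the treated sample
    dct_control = ct_target_ctrl - ct_ref_ctrl  # dCt of the control sample
    ddct = dct_sample - dct_control
    return 2 ** (-ddct)

# Hypothetical Ct values: lower target Ct than control -> upregulation.
fold = relative_expression(ct_target=24.0, ct_ref=12.0,
                           ct_target_ctrl=26.0, ct_ref_ctrl=12.0)
print(fold)  # 4.0: target expressed 4-fold higher than in the control
```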
Cardiac morphology and ventricular contractility
Tg (myl7:EGFP) zebrafish larvae were anesthetized, and the beating heart was observed under a DMI6000B inverted fluorescence microscope (Leica Microsystems, Wetzlar, Germany). The heart of each individual zebrafish larva was imaged for 30 s and analyzed. Cardiac contractility was estimated as ventricular fractional shortening (vFS) by calculating the ventricular dimension (VD) at end-systole (VD s ) and end-diastole (VD d ). The formula was as follows: vFS = (VD d – VD s ) / VD d × 100
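The vFS formula above translates directly into code. The ventricular dimensions below are hypothetical values in arbitrary units, used only to show the calculation:

```python
def ventricular_fractional_shortening(vd_diastole, vd_systole):
    """vFS (%) = (VDd - VDs) / VDd * 100, per the formula above."""
    return (vd_diastole - vd_systole) / vd_diastole * 100.0

# Hypothetical end-diastolic and end-systolic dimensions:
print(ventricular_fractional_shortening(100.0, 70.0))  # 30.0 (%)
```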
Blood flow and cardiac contraction irregularity
After the zebrafish larvae were anesthetized, blood flow data were obtained by real-time recording of blood flow in the dorsal aorta close to the heart for 30 s using the MicroZebraLab system (ViewPoint, Civrieux, France). Cardiac contraction irregularity was estimated as the standard deviation (SD) of the beat-to-beat interval, calculated based on blood flow data using MATLAB (MathWorks, Natick, MA, USA).
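The irregularity metric (SD of beat-to-beat intervals) can be sketched in a few lines; here in Python rather than MATLAB, with hypothetical beat timestamps rather than real flow-derived data:

```python
import statistics

def beat_to_beat_sd(beat_times_s):
    """SD of successive beat-to-beat intervals, given beat timestamps (s)."""
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    return statistics.stdev(intervals)

# Hypothetical timestamps: a perfectly regular vs. an irregular rhythm.
regular = [0.0, 0.5, 1.0, 1.5, 2.0]
irregular = [0.0, 0.4, 1.1, 1.5, 2.4]
print(beat_to_beat_sd(regular))    # 0.0
print(beat_to_beat_sd(irregular))  # larger SD -> more irregular contraction
```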
Locomotion and survival analysis
The locomotion of zebrafish was analyzed using DanioVision and EthoVision XT (Noldus, Wageningen, Netherlands). Zebrafish larvae were individually placed in square 96-well plates with 200 μL of the egg water as a medium, and their movements were tracked and recorded for 5 min using DanioVision. During locomotion monitoring, motion was induced by stimulation with a tapping device every 30 s. Zebrafish locomotion was estimated by movement distance and duration, analyzed by EthoVision XT.
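Movement distance and duration from tracked coordinates can be computed as below. This is a simplified sketch of the kind of metrics the tracking software reports, not EthoVision XT's actual algorithm; the fixed frame interval and movement threshold are assumptions.

```python
import math

def track_metrics(points_xy, frame_dt_s, moving_threshold=0.1):
    """Total path length and time spent moving from (x, y) positions
    sampled at a fixed frame interval (frame_dt_s seconds)."""
    steps = [math.dist(a, b) for a, b in zip(points_xy, points_xy[1:])]
    distance = sum(steps)
    # Count a frame toward movement duration only if the step exceeds
    # the (assumed) noise threshold.
    duration = sum(frame_dt_s for s in steps if s > moving_threshold)
    return distance, duration

# Hypothetical track: the larva moves once, then rests for two frames.
dist, dur = track_metrics([(0, 0), (3, 4), (3, 4), (3, 4)], frame_dt_s=0.04)
print(dist, dur)  # 5.0 0.04
```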
Kaplan–Meier survival analysis was used for survival analysis. Zebrafish larvae were transferred to a 96-well plate, one per well. Survival was observed using a microscope every 12 h until 9.5 dpf.
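A minimal Kaplan–Meier estimator of the kind underlying the survival curves can be sketched as follows; the larval survival times are hypothetical illustration data (1 = death observed, 0 = alive/censored at study end).

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier estimator: returns (time, S(t)) pairs at each
    observed death time. events[i] is 1 for a death, 0 for censoring."""
    s = 1.0
    curve = []
    n_at_risk = len(times)
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= sum(1 for ti in times if ti == t)  # remove all leaving at t
    return curve

# Hypothetical survival times (dpf) for 8 larvae; 4 survive to 9.5 dpf.
times  = [6, 7, 7, 8, 9.5, 9.5, 9.5, 9.5]
events = [1, 1, 1, 1, 0,   0,   0,   0]
print(kaplan_meier(times, events))
```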
Molecular docking analysis
A protein structure prediction model for zebrafish NHE1 was prepared using the AlphaFold Protein Structure Database, developed by DeepMind (London, UK) and EMBL-EBI (Cambridgeshire, UK) 27 . The 3D structures of GLU, EMPA, SOTA, and cariporide (CARI) for molecular docking analysis were obtained from PubChem. AutoDock Vina was used to analyze ligand binding sites and binding energy in zebrafish NHE1 28 .
Drug affinity responsive target stability (DARTS)
The DARTS assay is an experimental method for identifying and studying protein‒ligand interactions 29 . We performed a DARTS assay to experimentally demonstrate the binding of SGLT inhibitors to zebrafish NHE1. More than 500 zebrafish larvae were euthanized and reacted with 0.2% trypsin EDTA and collagenase to separate them into single cells 30 . The protein was extracted by gently lysing the cells using M-PER (Thermo Fisher Scientific, Waltham, MA, USA). DMSO, EMPA, SOTA or CARI (Sigma‒Aldrich, St. Louis, MO, USA) was reacted with the prepared proteins at room temperature (RT) for 1 h. The reacted proteins were treated with pronase (Sigma‒Aldrich, St. Louis, MO, USA) for 30 min to induce proteolysis. The amount of each protein was confirmed by immunoblotting using antibodies against NHE1 (Thermo Fisher Scientific, Waltham, MA, USA) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH, Abcam, Cambridge, UK).
Measurement of intracellular H + , Na + and Ca 2+ concentrations
To measure intracellular H + , Na + , and Ca 2+ concentrations, we used H9C2 cardiomyocytes cultured with 10% fetal bovine serum (FBS) in Dulbecco’s modified Eagle’s medium (DMEM, GenDEPOT, Katy, TX, USA) in black 96-well plates (Corning, New York, USA). To apply high glucose (HG) stimulation, cultured H9C2 cells were incubated with or without the addition of 40 mM D-glucose solution for 24 h after replacement of the medium with low-glucose DMEM (GenDEPOT, Katy, TX, USA). After treatment with EMPA, SOTA, or CARI for 2 or 24 h, cells were stained for intracellular ions. Staining for H + , Na + or Ca 2+ was performed using pHrodo Red AM (Thermo Fisher Scientific, Waltham, MA, USA), SBFI AM (Abcam, Cambridge, UK), or Fluo-4 AM (Thermo Fisher Scientific, Waltham, MA, USA), respectively, according to the manufacturer’s instructions. Staining was performed using a live cell imaging solution (LCIS, Thermo Fisher Scientific, Waltham, MA, USA) containing HEPES. After being washed with LCIS, the cultured cells were incubated at 37 °C for 30 min with pHrodo Red AM or Fluo-4 AM or at 37 °C for 60 min with SBFI AM. After the cells were stained with pHrodo Red AM, SBFI AM or Fluo-4 AM, fluorescence was measured using a SPARK multimode microplate reader (TECAN, Männedorf, Switzerland) at 560/580 nm, 340/500 nm, and 494/516 nm, respectively.
Statistical analysis
All data are presented as the mean ± SD. Statistical analyses were performed with GraphPad Prism 8 (GraphPad Software, San Diego, CA, USA). Data were analyzed using Student’s t-test or the Mann–Whitney U-test to compare two groups or using one-way analysis of variance (ANOVA) followed by Tukey’s post hoc test to compare more than two groups. The Kaplan–Meier method with the log-rank test was used for survival analysis.

Results
Treatment with empagliflozin or sotagliflozin improved survival and locomotion in DM-HFrEF zebrafish
We first investigated the viability of DM-HFrEF zebrafish after treatment with various concentrations of EMPA or SOTA to determine whether these drugs increased the survival rate. The DM-HFrEF model had significantly reduced survival at 8 and 9 dpf (Supplementary Fig. 2 ), whereas the groups treated with 0.2, 1, and 5 μM EMPA or SOTA had significantly increased survival at 8 and 9 dpf (Fig. 1 ). Interestingly, a significant increase in survival was observed with 25 μM EMPA (Fig. 1a, b ) but not with 25 μM SOTA (Fig. 1c, d ).
Next, we evaluated the changes in locomotion after treatment with various concentrations of EMPA or SOTA. In the DM-HFrEF zebrafish model, locomotion was further reduced after 24 h of TER treatment compared to the locomotion of the non-DM and DM-only groups. EMPA or SOTA treatment significantly preserved locomotion (Fig. 2a ). In particular, 1 and 5 μM EMPA or SOTA significantly improved movement distance, whereas 0.2 and 25 μM EMPA or SOTA did not (Fig. 2b ). Movement duration showed a similar trend. A significant increase was observed in the treatment groups given 5 μM EMPA or SOTA, whereas no significant difference was observed in the treatment groups given 0.2, 1 and 25 μM EMPA or SOTA (Fig. 2c ).
When we evaluated the gross morphology of the zebrafish, we observed no significant difference in morphology between the groups, regardless of DM or HF conditions (Fig. 3a–d ). In addition, morphological abnormalities were not observed in DM-HFrEF zebrafish treated with 0.2, 1, 5 and 25 μM EMPA or 0.2, 1 and 5 μM SOTA (Fig. 3e–k ). Pericardial edema was observed in zebrafish larvae treated with 25 μM SOTA, and interestingly, a marked uninflated swim bladder was observed in these larvae compared to those in other groups (Fig. 3l and Supplementary Fig. 3 ). An uninflated swim bladder caused by high-molarity SOTA was observed not only in the DM-HFrEF zebrafish model but also in the non-DM zebrafish model of HF (Supplementary Fig. 4a, b ).
Treatment with empagliflozin or sotagliflozin improved cardiac function in a DM-HFrEF zebrafish model
Next, we compared the cardioprotective effects of EMPA with those of SOTA. Either EMPA or SOTA treatment preserved cardiac contractile functions in the DM-HFrEF zebrafish model, in which a marked decrease in cardiac functions was observed compared to non-DM or DM-only models (Supplementary Fig. 5 ).
Ventricular contractility is estimated by the vFS parameter. There was no difference in vFS between non-DM and DM-only zebrafish, but a significant decrease in vFS was observed in DM-HFrEF zebrafish compared to DM-only zebrafish (Supplementary Fig. 5a, b ). Treatment with various concentrations of EMPA or SOTA significantly enhanced the vFS of DM-HFrEF zebrafish. Treatment with 0.2–5 μM of both drugs significantly improved vFS, although no significant change was observed at 0.04 μM (Fig. 4a, b and Supplementary Movie 1 ). Notably, the vFS-preserving effect of EMPA peaked at 5 μM, whereas that of SOTA peaked at 1 μM (Fig. 4b ). There was also no significant difference between groups in terms of heart morphology (Fig. 4a and Supplementary Movie 1 ).
The heart is a regular and constantly beating organ, and irregular contraction is a hallmark of HF. The SD of the beat-to-beat interval was calculated to quantify irregular contractions. More irregular contractions were observed in DM-HFrEF zebrafish than in the non-DM or DM-only group (Supplementary Fig. 5c ). The most pronounced increase in the SD of the beat-to-beat intervals was observed in the DM-HFrEF zebrafish (Supplementary Fig. 5d ). Treatment with various concentrations of EMPA or SOTA significantly suppressed irregular contractions in DM-HFrEF zebrafish (Fig. 4c ). The SD of the beat-to-beat intervals in the DM-HFrEF zebrafish treated with 0.2–5 μM EMPA or SOTA was significantly preserved, but the same was not true with 0.04 μM EMPA or SOTA treatment (Fig. 4d ).
In addition, nppb , a gene encoding the zebrafish form of the HF biomarker B-type natriuretic peptide, was markedly increased in DM-HFrEF zebrafish, whereas it was significantly decreased in both the EMPA- and SOTA-treated groups (Supplementary Fig. 6a ). EMPA and SOTA treatments did not affect the expression of ins , a preproinsulin gene, and pck1 , a phosphoenolpyruvate carboxykinase (PEPCK) 1 gene involved in gluconeogenesis, in contrast to the improvements in survival, locomotion, and cardiac function (Supplementary Fig. 6b, c ).
Both empagliflozin and sotagliflozin bound structurally to zebrafish NHE1 and inhibited its function
We performed in silico analysis of EMPA and SOTA binding to the predicted structural model of zebrafish NHE1. EMPA, SOTA, and the selective NHE1 inhibitor CARI were bound to the same site in zebrafish NHE1 (Fig. 5a and Supplementary Fig. 7 ). In addition, the binding affinities of EMPA, SOTA, and CARI were measured at −7.8, −7.2, and −6.1 kcal/mol, respectively; all three had higher binding affinities than the negative control, GLU (−4.7 kcal/mol, Fig. 5b ).
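For intuition, binding free energies like those above can be converted to approximate dissociation constants via ΔG = RT ln(Kd). The sketch below is a rough illustration only: docking scores are not experimental free energies, and the temperature choice is an assumption.

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def kd_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """Approximate dissociation constant (M) from a binding free energy,
    using dG = RT * ln(Kd)."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * temp_k))

# Affinities reported above (kcal/mol); Kd values are illustrative only.
for name, dg in [("EMPA", -7.8), ("SOTA", -7.2), ("CARI", -6.1), ("GLU", -4.7)]:
    print(f"{name}: Kd ~ {kd_from_dg(dg) * 1e6:.1f} uM")
```

A more negative ΔG maps to a smaller Kd, i.e. tighter predicted binding, which is why GLU (−4.7 kcal/mol) serves as the weak-binding negative control.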
We then compared the binding of EMPA and SOTA to zebrafish NHE1 using a DARTS assay in vitro (Fig. 5c ). Pronase treatment resulted in the rapid proteolytic degradation of NHE1 in DMSO (VEH)-treated zebrafish protein, whereas proteolytic protection was observed in zebrafish protein reacted with EMPA, SOTA, or CARI (Fig. 5d, e ). The NHE1 band intensity in the groups that were reacted with EMPA, SOTA, or CARI was significantly higher than that in the VEH group (Fig. 5e ). However, GAPDH, which was used as a loading control, was consistently proteolyzed by pronase regardless of treatment with EMPA, SOTA, or CARI (Fig. 5d, f ).
Finally, we confirmed the functional inhibition of NHE1 by EMPA and SOTA in vitro. The activation of NHE1 exchanges intracellular H + with extracellular Na + , and the influx of Na + is in turn exchanged with Ca 2+ through the sodium-calcium exchanger (NCX), resulting in an increase in intracellular Na + and Ca 2+ 31 . We evaluated whether these ion concentrations were changed by EMPA or SOTA in cardiomyocytes under high glucose (HG) conditions. Under HG conditions, intracellular Na + and Ca 2+ were increased compared to low glucose (LG) conditions, but treatment with EMPA, SOTA or CARI suppressed the changes in the intracellular Na + and Ca 2+ concentrations caused by HG (Fig. 5g, h ). Similarly, treatment with EMPA, SOTA or CARI suppressed the slight change in intracellular H + concentration mediated by HG, but the difference was not statistically significant (Supplementary Fig. 8 ). These reductions were observed at both 2 h and 24 h after EMPA, SOTA or CARI treatment (Fig. 5g, h ). The inhibitory effect of EMPA or SOTA treatment on changes in intracellular Na + and Ca 2+ showed a concentration-dependent tendency, and statistical significance was observed at 5 μM (Supplementary Fig. 9a, c, d ). In particular, the intracellular Na + concentration of cells treated with SOTA differed significantly at 0.04 to 5 μM SOTA (Supplementary Fig. 9b ). After CARI pretreatment, cardiomyocytes were treated with EMPA or SOTA, and intracellular Na + and Ca 2+ concentrations were measured. No significant differences in intracellular Na + or Ca 2+ were observed between the CARI-only group and the EMPA post-treatment group. In the SOTA post-treatment group, no significant difference was observed for intracellular Ca 2+ , but the intracellular Na + concentration decreased significantly compared to the other two groups (Fig. 5i, j ).

Discussion
This study provides new insights into the protective effects of EMPA, a highly selective SGLT2 inhibitor, and SOTA, a dual SGLT1/2 inhibitor, against DM-HFrEF. First, at the same molarity, EMPA and SOTA exerted similar contraction regularity-improving, locomotion-preserving, and survival-promoting effects; EMPA was slightly superior to SOTA overall, but SOTA was superior to EMPA in preserving cardiac contractility. Moreover, the expected significant additive cardioprotective effect of SOTA was not observed in the DM-HFrEF zebrafish model. Second, the morphological abnormality and sharp decrease in survival rate observed in the high-dose SOTA-treated group imply the possibility of side effects of SOTA in zebrafish larvae. Third, both EMPA and SOTA inhibited NHE1 structurally and functionally, which may be the main mechanism underlying their cardioprotective effect.
Newly developed diabetes drugs, including SGLT2 and dual SGLT1/2 inhibitors, provide an effective blood-glucose-lowering effect 11 , 14 . These inhibitors have an excellent cardioprotective effect in addition to their use as diabetes drugs. According to the EMPA-REG and SOLOIST trials, both drugs dramatically reduced hospitalization for HF and overall mortality in patients with DM 8 , 10 . In addition, in experiments using an animal model of DM, treatment with EMPA or SOTA preserved cardiac function by inhibiting myocardial fibrosis, hypertrophy, and inflammation 32 , 33 . However, studies comparing the cardioprotective effects of EMPA and SOTA are still insufficient. We compared the beneficial effects of these two drugs for the first time, focusing on their cardioprotective effects. The results of this study using the DM-HFrEF zebrafish model show that treatment with each of these drugs has a remarkable and similar cardioprotective effect and survival-promoting effect.
SOTA is the first dual SGLT1/2 inhibitor with high selectivity for both SGLT1 and SGLT2 14 . SGLT1 is expressed in the myocardium 34 . SGLT1 in cardiomyocytes contributes to glucose uptake and plays an essential role in pathological heart conditions 35 . SGLT1 inhibition helps decrease myocardial hypertrophy and fibrosis 36 . As such, SOTA is expected to provide a cardioprotective effect through the same mechanism as SGLT2 inhibitors, in addition to a cardioprotective effect through SGLT1 inhibition, and therefore to offer a greater cardioprotective effect than single-specificity SGLT2 inhibitors. However, in our study, the two inhibitors conferred similar cardioprotective effects and survival rate improvements at various molarities (0.2–5 μM). The maximal cardiac effects of the two drugs were similar. A slight difference between the two drugs was observed in vFS, one of the variables used herein to evaluate cardiac function: SOTA reached its maximum effect at a lower concentration than EMPA, but the difference was not significant. Although SOTA inhibits SGLT1, the similar cardioprotective effects of the two drugs suggest that NHE1 inhibition rather than SGLT1 inhibition is the main mechanism behind this effect in the DM-HFrEF zebrafish model. However, further studies are needed to analyze the contributions of NHE1 and SGLT1 inhibition to cardiac function protection. In clinical practice, the doses of these drugs are 10 or 25 mg for EMPA and 200 mg for SOTA, based on clinical trials in DM patients 37 , 38 . Despite the clinical use of much higher doses of SOTA than EMPA, our results show that both drugs provide a significant cardioprotective effect even at low molarities. In addition, high-molarity SOTA significantly decreased the survival rate compared to the same molarity of EMPA, and pericardial edema and an uninflated swim bladder were observed. These results suggest that treatment with a high molarity of SOTA may have side effects in zebrafish.
Furthermore, although the swim bladder in zebrafish is evolutionarily homologous to the mammalian lung 39 , it is unclear whether SOTA-induced swim bladder abnormalities in zebrafish predict corresponding lung toxicity in mammals.
As SGLT2 is not expressed in cardiomyocytes, several studies focusing on the mechanism of the cardioprotective effect of SGLT2 inhibitors have focused on NHE1 as an off-target ligand of SGLT2 inhibitors 15 , 16 . Induction of NHE1 expression and activation is increased by DM-related stimuli 40 . Upregulation of NHE1 is found not only in patients with DM but also in the ventricular tissue of patients with HF 18 . In addition, selective inhibition of NHE1 improved cardiac function by inhibiting fibrosis and cardiac hypertrophy in an experimental HF model 19 . These reports suggest that NHE1 is a molecule that plays an essential role in the pathogenesis of DM-HFrEF and has potential as a novel target of therapeutic strategies for DM-HFrEF; however, this mechanism remains controversial 16 , 17 . We showed that both EMPA and SOTA structurally bind to NHE1 and inhibit its functions both in silico and in vitro. The possibility of binding was suggested by a molecular docking assay and a DARTS assay, and the possibility of inhibition was shown by measuring changes in intracellular Na + and Ca 2+ . In particular, inhibitor competition assays of CARI and EMPA provided clear evidence that EMPA inhibits NHE1, and the results for CARI and SOTA may be related to other mechanisms involved in SGLT1 inhibition. These results not only corroborate several previous studies 15 , 16 but also provide the first evidence that SOTA, a dual SGLT1/2 inhibitor, directly inhibits NHE1, just as single-specificity SGLT2 inhibitors including EMPA inhibit NHE1.
This study has several limitations. First, although the focus was on the inhibition of NHE1, elucidating the exact molecular mechanism of the cardioprotective effect will require more precise studies of these interactions through loss-of-function and gain-of-function experiments on NHE1, SGLT1, and SGLT2. Second, although the use of zebrafish as an animal model in this study has various advantages, it may be difficult to apply the results to humans because zebrafish are nonmammalian. Third, the inhibitory effect of SGLT inhibitors on NHE1 was confirmed only in cardiomyocytes; it is still necessary to confirm the effects of SGLT inhibitors in other cell types constituting the heart, such as endothelial cells and immune cells. Finally, the cause of the decreased survival and morphological abnormalities observed in the high-dose SOTA-treated group has not yet been elucidated.
In conclusion, this study showed that EMPA, a highly selective SGLT2 inhibitor, and SOTA, a dual SGLT1/2 inhibitor, provide similar cardioprotective effects in a zebrafish model of DM-HFrEF. No additional protective effect attributable to the SGLT1 inhibition expected of the dual SGLT1/2 inhibitor was observed. However, both inhibitors showed a high binding affinity for NHE1. Therefore, we propose that NHE1 inhibition is an essential mechanism for the cardioprotective effects of SGLT2 and dual SGLT1/2 inhibitors. This study will help researchers understand the mechanisms by which SGLT2 inhibitors and dual SGLT1/2 inhibitors affect DM-HFrEF and provide important information about the potential benefits of these inhibitors for DM patients with HF.

Abstract

The sodium-glucose cotransporter 2 (SGLT2) inhibitor empagliflozin (EMPA) and the dual SGLT1/2 inhibitor sotagliflozin (SOTA) are emerging as heart failure (HF) medications in addition to having glucose-lowering effects in diabetes mellitus (DM). However, the precise mechanism underlying this cardioprotective effect has not yet been elucidated. Here, we evaluated the effects of EMPA and SOTA in a zebrafish model of DM combined with HF with reduced ejection fraction (DM-HFrEF). To compare the effects of the two drugs, survival, locomotion, and myocardial contractile function were evaluated. The structural binding and modulating effects of the two medications on sodium-hydrogen exchanger 1 (NHE1) were evaluated in silico and in vitro. DM-HFrEF zebrafish showed impaired cardiac contractility and decreased locomotion and survival, all of which were improved by 0.2–5 μM EMPA or SOTA treatment. However, the 25 μM SOTA treatment group had worse survival rates and less locomotion preservation than the EMPA treatment group at the same concentration, and pericardial edema and an uninflated swim bladder were observed.
SOTA, EMPA and cariporide (CARI) showed similar structural binding affinities to NHE1 in a molecular docking analysis and drug response affinity target stability assay. In addition, EMPA, SOTA, and CARI effectively reduced intracellular Na + and Ca 2+ changes through the inhibition of NHE1 activity. These findings suggest that both EMPA and SOTA exert cardioprotective effects in the DM-HFrEF zebrafish model by inhibiting NHE1 activity. In addition, despite the similar cardioprotective effects of the two drugs, SOTA may be less effective than EMPA at high concentrations.
Diabetes: Glucose-lowering drugs protect against heart failure
Two existing drugs used to lower glucose in patients with diabetes also reduce the chances of heart failure by blocking the excessive activity of a protein involved in the transport of molecules across membranes. The drugs empagliflozin and sotagliflozin, which inhibit sodium-glucose transporters in diabetes, also help protect against heart failure, but the mechanisms are unclear. Hae-Young Lee and Seung Hyeok Seok at Seoul National University, South Korea, and co-workers examined the cardio-protective effects of the drugs on zebrafish models of diabetic heart failure. They focused on sodium-hydrogen exchanger 1 (NHE1), which regulates sodium and hydrogen levels to maintain pH balance, and is increased in the heart ventricle tissue of patients with heart failure. The team found that both drugs blocked NHE1 activity, while also targeting sodium-glucose transporters. Empagliflozin was particularly effective at protecting the heart in this way.
Supplementary information
The online version contains supplementary material available at 10.1038/s12276-023-01002-3.
Acknowledgements
The Zebrafish Center for Disease Modeling (ZCDM), Korea, provided the transgenic ( myl7:EGFP ) zebrafish strain. This work was supported by a grant from the New Faculty Startup Fund from Seoul National University to H.-Y.L. and by National Research Foundation (NRF) of Korea grants, funded by the Korean government, to S.H.S. (2020R1A2C2010202 and 2020R1A4A2002903).
Competing interests
The authors declare no competing interests.

Exp Mol Med. 2023 Jun 1; 55(6):1174-1181. License: CC BY.
PMC10364560 (PMID: 37477590)

Background
Young people in Malawi face multiple health challenges, most of them related to their sexual and reproductive health and rights (SRHR). Malawi has one of the highest HIV prevalence rates, estimated at 8.9% for the general population, with one in three new HIV infections occurring among adolescents and youth aged 15–24 years. 1 Vulnerable adolescents and young people, including young people living with HIV (YPLHIV) and young people with disabilities (YPWD), share a compounded risk of poor sexual and reproductive health outcomes, as they have limited access to services and face stigma and discrimination. 2–4
As children grow older, it is important that they acquire knowledge, attitudes, skills and values related to the human body, intimate relationships and sexuality through comprehensive sexuality education (CSE), 5 as described in UNESCO’s International technical guidance on sexuality education: an evidence-informed approach. In Malawi, CSE is delivered in-school, through the Life Skills Education curriculum that is designed to empower learners and their teachers to effectively deal with the social and health challenges and pressures affecting young people, including HIV and AIDS, teenage pregnancies, and various forms of abuse, and through various out-of-school programmes. 7
Unfortunately, most YPWD and YPLHIV do not get the opportunity to enrol or complete their education for reasons including the lack of reasonable accommodations within schools, discrimination and the lack of support from their carers, and thus miss out on the opportunity to obtain CSE in school. 8–10 We undertook a formative study to inform the design of implementation research that aims to generate evidence on delivery of CSE for out-of-school YPLHIV and YPWD in Malawi. The implementation research in Malawi is part of the Reaching those most left behind through CSE for out-of-school young people initiative which is being undertaken in Malawi, Ethiopia, Ghana and Colombia, with the support of the United Nations Population Fund (UNFPA). This paper will focus on understanding the demographic and social context of YPLHIV and YPWD, the scope of out-of-school CSE, and how the different delivery channels of CSE reach YPLHIV and YPWD in Malawi.

Methods and analysis
Study design
This formative study was carried out between June 2020 and April 2022 to inform the design for the delivery of CSE to out-of-school YPLHIV and YPWD. We used a desk review and a mapping exercise of programmes delivering CSE in Malawi to understand the context of YPLHIV and YPWD. The desk review involved searching the databases Medline and Google Scholar, and manual searching of any material related to CSE in Malawi published between 1st January 2000 and 31st March 2022. We relied on the expertise of the UNFPA Malawi Partner Network and the Malawi Ministry of Youth to identify CSE programmes or interventions being implemented in Malawi. We also used Google searches and organisation websites to identify any material related to CSE in Malawi. The mapping aimed at identifying the content covered and focus population (see Table 1 ). We searched the terms (“comprehensive sex education” OR “sex education” OR “sexua*” “education” OR “sexua*”) “information” AND (“Malawi”).
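The search strategy above can be sketched programmatically. The helpers below are illustrative assumptions, not the authors' actual tooling: they assemble a boolean expression mirroring the quoted terms (the original term grouping is ambiguous, so the grouping here is a best-effort reading) and apply the stated publication-date window as an inclusion filter.

```python
from datetime import date

# Illustrative sketch only: function names, term grouping, and query
# syntax are assumptions mirroring the search strategy in the text.

def build_query() -> str:
    """Assemble a boolean search expression from the review's terms."""
    topic_terms = [
        '"comprehensive sex education"',
        '"sex education"',
        '"sexua*" "education"',
        '"sexua*" "information"',
    ]
    return "(" + " OR ".join(topic_terms) + ') AND ("Malawi")'

def in_review_window(published: date) -> bool:
    """Inclusion window from the methods: 1 Jan 2000 to 31 Mar 2022."""
    return date(2000, 1, 1) <= published <= date(2022, 3, 31)
```

In practice such a query string would be pasted into (or sent to) each database's own search interface, with the date window applied either in the query or, as here, as a post-hoc screening step.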
Study inclusion and exclusion criteria
We looked at information on the demographic characteristics and context of YPLHIV and YPWD and at CSE programmes reaching (or not reaching) them. We excluded information on programmes with a primary focus on education but with no clear component of sexuality education or SRH information directed towards young people in Malawi.
Data synthesis and analysis
We used a narrative analysis to explore the situation and context of YPLHIV and YPWD in relation to the SRHR problems that they face. We synthesised the key themes and messaging for different CSE programmes in comparison with the needs of YPWD and YPLHIV. We further explored how existing programmes and methods of delivery include or fail to include these young people. We then summarised findings to identify the gaps that exist in the current CSE for YPWD and YPLHIV in Malawi.

Results
The situation of YPWD in Malawi
In Malawi, it is estimated that 10.4% of the population over 5 years old have at least one type of disability. 11 The most common types of impairment are limited sight, mobility and hearing. Evidence shows that rural and poor residents are more likely to have impairments leading to disabilities than their urban and wealthy counterparts. Furthermore, some impairments are treatable or can be corrected and yet persist due to inadequate access to health care 12 and assistive aids (such as corrective glasses). Thus, there is a strong linkage between poverty and disability in Malawi.
YPWD represent a vulnerable group due to stigma and discrimination which hinder them from social, economic, political and physical participation. Most of these young people are poor and rural residents and have limited access to health services for several reasons. First, they lack access to services due to long distances since most of them are from remote rural areas; this is exacerbated by poverty which limits their ability to finance their transportation. 4 Second, in instances where they are able to get to health facilities, they also face systematic barriers within health facilities, such as the lack of accommodative resources like use of sign language for those with hearing impairment, 13 braille resources, and trained service providers for young people with special needs. Furthermore, people with disabilities are culturally presumed to be asexual and are often not targeted with SRHR messages. They also face abuse from service providers and other service users at point of service, 13 , 14 thus further pushing them away from services.
The Government of Malawi has made great strides to ensure that YPWD have access to formal education, yet 35% of YPWD have never attended any education compared to 18% of the general population. 15 More YPWD drop out of school earlier due to limited resources to accommodate their needs 12 , 16 , 17 such as skilled teachers, braille facilities for the blind, different font materials for young people with albinism and hearing aids for young people with hearing impairments. 18 YPWD enrolled in mainstream education also face discrimination and isolation from their peers, further pushing them to drop out. 16 , 18 They may also drop out earlier as a result of frequent illness or chronic pain associated with their impairment. 19 In some instances, families may also not enrol their children due to the belief that their child cannot be educated without the resolution of their impairment. 12 Since CSE forms part of the curriculum, most YPWD thus miss out on the opportunity to attain school-based CSE.
The situation of YPLHIV in Malawi
HIV remains a big problem in Malawi. Adolescent girls and young women aged 15 to 24 years have a disproportionate burden of HIV, and are four times more likely to get infected than their male counterparts. 1 In addition, young people living in the urban areas or Southern region have a worse burden of infection than their rural or Central or Northern region counterparts.
Despite various campaigns to end HIV stigma within communities, YPLHIV still face a lot of stigma and discrimination associated with their own or their guardian’s HIV-positive status. 10 , 20 , 21 HIV stigma is based on misconceptions which devalue the social status of people living with HIV within communities, such that other members of the community may not wish to be associated with them. For instance, HIV infection is often attributed to promiscuity, which is culturally frowned upon. Furthermore, incorrect information about the transmission mechanisms of HIV pushes them into isolation, as people may fear getting infected by sharing items such as utensils and clothing with them. 20 , 22 , 23 Thus, YPLHIV face stigma by being stereotyped, excluded, or discriminated against due to their HIV status.
The stigma and discrimination that YPLHIV face also has negative consequences on their mental wellbeing. 21 , 24 While mental health challenges are common amongst all types of young people, YPLHIV have an increased risk of poor mental health 21 , 25 , 26 that manifests through low self-esteem, anxiety, isolation, depression and suicide ideation, which are exacerbated by the discrimination and isolation which they face. As YPLHIV often have to hide their HIV-positive status, 10 , 27 this creates anxiety as they explore intimate sexual relationships as well as friendships.
A proportion of YPLHIV were infected vertically through transmission from mother to child, 28 increasing their chances of being orphaned. 28 This, in turn, increases their vulnerability as they are placed in foster care, or left without any guardian, 29 and thus exposed to poverty. 30 As they often have to provide for themselves, YPLHIV have a higher likelihood of being exploited through involvement in child labour and, for girls, transactional sex. 31 , 32 Their involvement in transactional sex further subjects them to risky sexual behaviours as their power within the relationships is often compromised. 32
YPLHIV have access to education through schools; however, their completion rates are lower than the general population. Evidence suggests that YPLHIV drop out earlier for reasons that include discrimination from their peers while in school; frequent absenteeism due to AIDS-related illness; and involvement in child labour to provide for themselves. 10
In recognition of the discrimination and unique challenges that YPLHIV face, the Teen Club Programme was introduced to provide peer support. Established in 2002, teen clubs are exclusive monthly clinics for adolescents living with HIV. They provide both clinical services and psychosocial support. Through teen clubs, YPLHIV can share their experiences in various aspects of life and encourage each other. 33 Many facilities in Malawi have teen clubs that enrol children and adolescents living with HIV. 34 Through teen clubs, YPLHIV interact with their peers, address their sexual and reproductive needs and also access health services. Despite this, many social, economic and contextual factors are not adequately or comprehensively addressed in response to their needs as YPLHIV. 20 Furthermore, as they get older, they are required to transition to adult clinics, regardless of their level of preparedness to face the adult world.
Delivery approaches for CSE in Malawi
The key delivery approaches for CSE in Malawi include a school-based approach through the life skills curriculum designed for primary school and secondary school learners; a community-based approach offered through traditional or religious channels and civil society/non-governmental organisations (NGOs) through various peer groups or teen clubs.
In-school sexuality education
The Government of Malawi introduced life skills for secondary and primary school learners beginning in the late 1990s as a response to the growing HIV and AIDS epidemic in the country. 7 , 35 The curriculum combined lessons to help learners learn about their sexuality and explore life skills that would enable them to live a healthy and meaningful life. The life skills syllabuses were designed for learners from the early years of primary school all the way through to secondary school. Content is incremental as one progresses with schooling, and is also age-sensitive, designed under the assumption that younger learners are in lower classes. 36 The curriculum centres around the core modules on health promotion (including information on prevalent diseases such as malaria, sexually transmitted infections, HIV and AIDS); social development (relationships and gender equality); moral development (culture and religions); personal development (self-esteem, decision making and skills development) and physical development.
Community-based sexuality education
In Malawi, initiation ceremonies are a common practice, with participation estimated at 43% for girls and 35% for boys, and higher among rural than urban adolescents. 37 Initiation ceremonies are a common space in Malawi for the delivery of sexuality education, including responsible sexual and reproductive behaviour. Ceremonies take various forms; they can be delivered by churches or by cultural custodians, although the messages shared are quite similar, with varying levels of detail. 38 Topics covered generally include personal hygiene, respect, sexually transmitted infections and HIV, and the implications of pregnancy from menarche onwards. Traditionally, elderly women are invited to talk to a girl as soon as she starts her first menstruation. 37 , 38 In some instances, all girls who started their first menstruation during that year are brought together and counselled by women through initiation ceremonies where messages are put across through songs and traditional dances. 38 We found no evidence to indicate inclusion or exclusion of YPWD; however, YPLHIV may be included and encouraged to live positively. 39
Civil society/NGOs sexuality education
CSE in Malawi is also delivered by NGOs. These organisations take different approaches, usually through teen clubs or peer groups. Different NGOs target different groups: for example, girls only, out-of-school youth and in-school-youth. For example, UNICEF programmes have addressed school-age children through school and out-of-school clubs and their activities have included exchange visits between clubs and the peer education and youth-led awareness campaign. 40
CSE mapping of programmes
Our mapping exercise shows that there are a number of CSE programmes being implemented within the country by both local and international organisations (see Table 1 ). The programmes reach adolescents, aged 10 to 19 years, and young adults, aged 20 to 25 years. Programmes vary, with some reaching in-school youth, others out-of-school youth and most programmes reaching both. Most of the programmes reach young people from the general population, and only a few reach marginalised youth such as YPLHIV and YPWD. In the sampled programmes, only two indicated that they are working with YPWD, and one indicated CSE messaging that was specific for YPWD. CSE is also often delivered through teen clubs, and enrolment into these clubs is often through existing community youth networks. Thus, marginalised teens who do not have pre-existing networks are often left out. Unfortunately, since YPWD and YPLHIV face discrimination and stigma, they often do not have networks outside their peers with similar characteristics, and are thus excluded from general community programmes.
The topics covered through the CSE programmes vary. Most cover topics on HIV and STIs, sexuality and sexual relationships, menstruation, pregnancy prevention and condom use, building self-esteem and goal setting, and gender-based violence and power dynamics. A few programmes also include topics on advocacy, abortion, pleasurable sex, nutrition and vocational skills. Despite covering topics on HIV, most of the content is around prevention, such as condom use, abstinence and HIV testing, but only a few programmes cover content on living with HIV. Thus, while HIV is a popular topic, only a few programmes cover content that addresses the specific needs that YPLHIV face.
The mapping shows that most projects operate in a few districts at a time, with the number of target districts ranging from one to six out of the 28 districts in Malawi. This means that CSE is delivered to only a fraction of the population.
Gaps in CSE delivery in Malawi
The gaps and challenges of current CSE programming to reach and address the needs of YPWD and YPLHIV fell under three broad categories: lack of consistency in CSE content; lack of resources and fragmentation of programming; and lack of inclusiveness of marginalised groups.
Lack of consistency in CSE content
CSE remains a controversial topic in Malawi, and has faced resistance from both cultural and religious groups. As a result, the information that is delivered varies from programme to programme, with some providing less information than others. 38 , 39 , 41 Furthermore, the information provided might also be contradictory from one programme to another. Some promote sexuality ideals such as abstinence-only and being faithful. Others may provide more information on sexuality issues that young people may face, such as acknowledging that young people are sexual beings and that sexual activity exists among them, and covering topics such as abortion and sexual orientation. This lack of consistency spans all delivery methods. Teachers may skip certain topics in a life skills curriculum to align with their personal beliefs, 36 , 42 and similarly, information offered through initiation ceremonies also varies. Thus, the lack of consistency in content may result in sexuality education that is not comprehensive and sufficient to respond to the needs of YPWD and YPLHIV as sexual beings.
Lack of resources and fragmentation of programming
Delivery of sexuality education requires commitment and investment in resources. Unfortunately, resources for CSE have remained a challenge, resulting in fragmentation of programming. For instance, the majority of NGOs work in limited geographical areas and only reach a limited number of young people at a time. Evidence also shows a lack of appropriate teaching and learning methodologies, the need to train teachers and to develop additional materials for use in all classes. 7 , 43
Lack of inclusiveness of marginalised groups
CSE in Malawi has followed traditional delivery approaches and very few programmes have provided CSE that is accessible to YPWD. Schools in Malawi generally lack accommodative resources such as braille facilities and sign language experts. 8 , 18 Similarly, very few programmes have invested in accommodative material and there is likewise a lack of literature to suggest the inclusion of YPWD in initiation ceremonies. Thus, there is need for development of training materials specifically for marginalised groups, and investments in facilitators to support delivery of CSE that ensures no groups of young people are left behind. Further limitations to inclusion are embedded in the core design of the school syllabus, that is, more information is given in higher classes. Unfortunately, most young people drop out before acquiring any meaningful skills; 2% of children drop out after one year of schooling, 35% complete primary education (eight years of schooling) and 18% complete secondary education (12 years of schooling). 15 Furthermore, YPWD and YPLHIV 10 , 15 have an increased risk of school drop-out, thereby making it unlikely that the majority of them will have obtained CSE through education.

Discussion
Our study set out to provide a situation analysis of YPLHIV and YPWD, map CSE programmes and explore how CSE delivery channels reach the target populations. CSE in Malawi is delivered through three main channels: in school through the life skills education curriculum; through traditional or religious initiation rites; and through civil society/NGO programmes. We mapped a range of CSE programmes with varying levels of content covered, target populations and implementation districts. Most programmes cover content on HIV and STIs, sexuality and sexual relationships, menstruation, pregnancy prevention and condom use, building self-esteem and goal setting, and gender-based violence and power dynamics. However, there are limited resources for YPWD.
Despite most programmes covering content on HIV knowledge and prevention, very few programmes are designed for YPLHIV. HIV content covered is often on condom use, HIV prevention and testing, 39 but there is little guidance on how these young people can navigate life as individuals as well as sexual beings living with HIV. As YPLHIV, they have unique needs and also often face discrimination and stigma associated with their HIV status. 3 , 10 , 44 The discrimination due to their HIV status affects their desire to associate with others, thus forcing them into isolation. 45 This, in return, has an impact on their educational attainment 10 and antiretroviral therapy retention, 3 among other things. Thus, YPLHIV require specially curated spaces where they can freely interact with their peers without judgement while receiving skills to guide them in navigating various aspects of their lives. A study targeting adolescents living with HIV in South Africa reported that young people both found comfort in teen clubs as spaces for learning about their sexuality, and preferred accessing treatment from adolescent clinics. 3 These findings are also echoed in other studies. 33
While HIV topics are covered in the various programmes, there is a complete lack of content specifically designed for YPWD. Similar to their peers living with HIV, YPWD also face discrimination and isolation associated with their disability. Even in instances where they are not isolated, they require accommodations in order for them to fully access various services. Unfortunately, most public services, including health and education, do not provide these accommodative resources, thus pushing these young people into social and economic isolation. Furthermore, YPWD are vulnerable to abuse and exploitation due to their compromised physical or mental abilities. They often need to rely on other people for support, which subjects them to further vulnerability, both in the home and in the community. Despite this increased vulnerability, YPWD are culturally often assumed to be asexual, and thereby not in need of any sexuality education. 46 Yet YPWD face an increased vulnerability to abuse. Thus, they too require safe spaces, where they are recognised as sexual beings and can learn and interact with their peers.
The mapping of programmes shows that there are a number of programmes offering sexuality education. These programmes target various groups of young people and cover various topics. They are beneficial in that they offer sexuality education for out-of-school youth, who may otherwise not have been able to gain sexuality education through schooling. Unfortunately, the programmes are also subject to funding ceilings which affect their scope and reach, as evidenced by most programmes being implemented in only a few districts. As such, most of these programmes are often fragmented, reaching very few young people. Most importantly, already marginalised groups such as YPWD are also excluded from these programmes, as specialised facilitators and increased resources are required in order to reach them. Beyond content, focus also has to be placed on the duration of the programme, as studies show better behavioural changes for programmes with longer duration and intensity of CSE. 47 Even for programmes reaching vulnerable groups, there is no system of monitoring of activities, and a lack of harmonisation and consistency in the content that is delivered, 42 thus providing sexuality education that may not be comprehensive.
The current study highlights gaps in consistency in CSE content, lack of resources resulting in fragmentation of programming and the lack of inclusiveness of marginalised groups. This suggests the need for central government commitment to advancing CSE in Malawi. There is a need for policies, guidelines and standards on the design and delivery of all CSE programmes in Malawi. These guidelines would provide minimum standards for CSE programming, thereby ensuring consistency in the content as well as methods of delivery. Furthermore, central level coordination of CSE would ensure equity in the geographical allocation of CSE programmes thus ensuring that CSE reaches all, while also ensuring that marginalised groups are not left out in Malawi. Therefore, there is a need for consistent and adequate CSE to be delivered both in-school and out-of-school to ensure that no young people are left behind. Deliberate efforts must be made to target young people who are unable to access CSE through traditional channels such as schools, or young people with specific needs, including YPWD and YPLHIV, and provide them with sexuality education that is comprehensive and also adapted to their unique needs.
Our study presents only a snapshot of the situation of YPWD and YPLHIV in Malawi. We acknowledge the lack of primary research in our methods to capture the views and lived experience of the focal population and of other key informants. There is also limited literature that captures the SRHR of YPWD and YPLHIV and few studies have been done on CSE in Malawi. As a result, findings from this study are drawn from very few studies as we did not conduct a full systematic review of the literature. Furthermore, due to the limited literature, the narrative analysis techniques we employed were unsystematic, 48 such that the findings were drawn specifically to provide a formative picture for the design of a CSE delivery intervention. Nonetheless, our study provides valuable insights into the current situation of YPWD and YPLHIV and it highlights the significance of new research as well as refocused programming of CSE to meet their needs.

Conclusions
The study describes the challenges that YPLHIV and YPWD face: stigma, discrimination and isolation, as well as being vulnerable to abuse, all of which highlight their need for CSE. The study also highlights the different programmes that are being implemented to provide CSE to young people and the various modes through which CSE is delivered. These CSE programmes are often fragmented, implemented in a few districts at a time and do not target all youth, thus leaving already marginalised populations behind. In instances where YPWD and YPLHIV may have access to CSE, they do not always receive CSE content that is appropriate for their needs. Thus, the study findings highlight the need for CSE programming that is designed to respond to the needs of YPWD and YPLHIV while also recognising the discrimination and isolation that YPWD and YPLHIV face.
Furthermore, to make CSE more inclusive, there is need for continued community engagement to make it more acceptable by changing conservative attitudes and values that hamper adolescents’ SRHR. In addition, facilitators of CSE need to be fully trained to meet the needs of young people while delivering sexuality education that is comprehensive and accessible to all.

Abstract
This formative study was undertaken between June 2020 and April 2021 to provide evidence to inform the design and delivery of comprehensive sexuality education (CSE) in Malawi for young people living with HIV (YPLHIV) and young people with disabilities (YPWD). The study included a desk review of the situation of these two groups and a mapping of CSE programmes and delivery approaches in Malawi. The study findings show that YPWD and YPLHIV in Malawi are marginalised groups, face stigma and discrimination, and are more vulnerable to abuse, warranting CSE that addresses their needs. Yet, they are often left out of sexuality education such as school-based programmes (due to early school drop-outs) and out-of-school programmes, as well as traditional modes. Furthermore, in instances where they have access to sexuality education, there is little evidence to suggest that the sexuality education that they receive is designed to address their needs, thus raising questions about its relevance. There is need for tailored CSE that addresses the needs of these groups and that is delivered using an approach that is easily accessible to them.
Disclosure statement
No potential conflict of interest was reported by the author(s).

Citation: Sex Reprod Health Matters. 31(2):2226345 (CC BY).
PMC10378336 | PMID 37508647

Pediatric endocrinology will undergo an extraordinary revolution this century. It has also become increasingly apparent that experiences and exposure throughout fetal life and childhood can have important effects on many diseases that develop much later during adult life. Alterations to the endocrine system mediate many of these long-term effects, and understanding how these changes occur and how they translate into compromised health in adulthood has become an additional question that confronts pediatric endocrinologists [ 1 , 2 , 3 ].
I shall quote just a few examples of the great challenges that wait to be tackled in the near future [ 2 ].
There is increasing evidence that the hypothalamus is involved in the regulation of energy balance and glucose homeostasis and, ultimately, the occurrence of diabetes. In humans, however, studies on the brain’s control of metabolism and the endocrine system are difficult to perform, and only translational research will provide robust data that can be translated into clinical practice. If it were not for this lack of data, the world of pediatric medicine would be at the final stages of research [ 4 , 5 ].
For almost a century, endocrine research has focused on the interaction between hormones and their receptors, providing major advances in the understanding of endocrine system physiology. More recently, the research focus has shifted to the pathways downstream of such receptors. The study of intracellular hormone signaling has revealed that some endocrine diseases may originate from alterations in signal transduction. It is easy to foresee that further investigations will reveal that many as of yet unexplained endocrinopathies are the result of subtle alterations in intracellular signaling [ 6 ].
Novel genes have recently been found to play a pivotal role in the regulation of reproduction and growth. Genome-wide association analysis has revealed the influence of many unexpected genes in the control of growth, puberty, and diabetes [ 7 ]. This new large-scale genetic approach challenges researchers to obtain more complete descriptions of the susceptibility architecture of endocrine traits and to translate the information gathered into improvements in clinical management [ 8 ]. However, the mechanisms through which genetic information is translated into phenotypic features and diseases as well as those underlying the interaction between hundreds of genes are still largely unknown. Genetic manipulation of experimental species, which utilizes transgenic and gene-knockout technology, has led and will lead to important advances in determining the relationship between genes and the function of their encoded proteins in the intact organism [ 9 ].
Alterations in the embryo–fetal and early postnatal hormonal environment, caused by either maternal diet or exposure to environmental factors, can modify the epigenome, and these modifications are inherited in somatic daughter cells and maintained throughout life, ultimately leading to permanent metabolic and endocrine changes. We are at the beginning of the epigenomics era which may provide important insights that can be translated into interventions to revert epigenetic programming [ 10 ]. The early prevention of adult diseases is at present a primary objective of pediatrics in general and pediatric endocrinology in particular. Obesity, diabetes, hypertension, and cardiovascular disease in adulthood may originate during embryo–fetal development and early postnatal life. Therefore, elucidation of the environmental, (epi)genetic, and endocrine mechanisms leading to long-term metabolic risk represents the primary task of pediatric endocrinologists [ 8 , 9 , 10 , 11 ].
The last three decades have witnessed growing concerns over the potential adverse effects that may result from exposure to a group of chemicals that have the potential to alter the normal functioning of the endocrine system in wildlife and humans. Potential adverse outcomes in both wildlife and humans have focused mainly on reproductive and sexual development and function, altered immune and nervous system function, thyroid function, and hormone-related cancers. Analysis of the human data in isolation, while generating concerns, has so far failed to provide firm evidence of direct causal associations between exposure to endocrine disruptors and adverse health outcomes. Our current understanding of the effects posed by endocrine disruptors on humans is incomplete. Uncertainty over the possible effects of chronic exposure to a number of chemicals with endocrine-disrupting potential and the fundamental role played by the endocrine system in maintaining homeostasis make the study of the effects posed by exposure to these chemicals a worldwide research priority [ 10 , 11 , 12 ].
Further challenges come from the development of novel therapeutic approaches for endocrine diseases in childhood [ 13 ]. New drug formulations, individualized treatment based on pharmacogenomics, as well as gene and stem cell therapies represent further research fields for pediatric endocrinologists of the 21st century.
Below are just a few examples of the potential research fields:
- long-acting growth hormone;
- novel long-acting or ultra-rapid-acting insulin;
- new therapies for diseases such as achondroplasia;
- new gene-panel research.
The above challenges present a real opportunity for the future and are key challenges for both pediatricians and their patients.

Conflicts of Interest
The author declares no conflict of interest.

Citation: Children (Basel). 2023 Jun 30; 10(7):1151 (CC BY).
PMC10427713 | PMID 37582804

Introduction
Ultra-thin ferromagnetic insulators (FMI) with perpendicular magnetic anisotropy (PMA) and low damping provide new opportunities for inducing emergent magnetic and topological phenomena at interfaces and efficiently sourcing, controlling and detecting pure spin currents, thereby changing the landscape of spin wave devices. FMIs with PMA have been shown to stabilize topological defects in the form of skyrmions that are robust to perturbations, and have been predicted to give rise to the quantum anomalous Hall effect when interfaced with topological insulators 1 – 3 . They also support the manipulation and isotropic propagation of spin waves in the absence of dissipative charge currents, providing a new paradigm for energy efficient spin-based computing and memory 4 – 9 . However, crucial to the success of PMA FMIs in applications is the presence of two additional features: low magnetic damping in the ultra-thin regime and high interface quality with adjacent spin-to-charge conversion layers. These factors optimize performance by decreasing switching current in spin-transfer-torque MRAM (STT-MRAM) 10 , increasing domain wall velocity in racetrack memory 11 or increasing Dzyaloshinskii-Moriya interaction (DMI) strength to aid formation of skyrmions 12 , among other effects.
Previous reports of FMI thin film systems that possess both PMA and low damping were largely devoted to garnet structure systems 5 , 13 – 15 . Although garnets are well known for their low damping, many studies have shown the presence of a significant magnetic dead layer at the film-substrate interface due to interdiffusion, likely a result of their high growth temperatures of 600–900 °C 16 – 19 . However some studies have shown that a sharp interface with bulk magnetization properties can be obtained in garnet films 15 , 20 , 21 . The interdiffusion layer places an undesirable lower bound on the thickness of the magnetic layer, which when combined with the complex crystal structure of the garnets makes them difficult to integrate into heterostructures for applications. More importantly, Pt, the heavy metal (HM) of choice in FMI-based spintronics studies, grows amorphously or incoherently on these FMIs, impacting the efficient transfer of spin across the interface.
In search of an FMI that exhibits PMA and low damping at extremely low thicknesses while having a low thermal budget and high quality interfaces with HMs, we have realized a new class of ultra-thin low loss spinel structure Li 0.5 Al 1.0 Fe 1.5 O 4 (LAFO) thin films. Structural characterization demonstrates a highly crystalline and defect-free film. When grown on MgGa 2 O 4 (MGO) substrates, LAFO demonstrates strain-induced PMA and ultra-low magnetic damping as low as α = 6 × 10 −4 for 15 nm thick films, on the order of values reported in yttrium iron garnet (YIG) systems with PMA at room temperature 5 , 14 , 22 . Bilayers of LAFO and Pt exhibit efficient transfer of spin current from the HM to the FMI, which we attribute to the high quality of the Pt/LAFO interface. Spin-orbit torque (SOT) switching in Pt/LAFO is demonstrated with critical switching currents as low as 6 × 10 5 A/cm 2 . Note that this value is for an FMI system and is an order of magnitude lower than the values of 10 7 A/cm 2 typically observed in garnet/Pt systems at similar fields 5 , 23 , 24 . We also estimate the spin torque efficiency using harmonic Hall measurements and extract large damping-like spin torque efficiencies. The combination of low damping, PMA, absence of magnetic dead layers, epitaxial Pt overlayers and low current density SOT switching in LAFO films demonstrates a new class of magnetic insulating thin film materials for spin wave-based spintronics. To this end, LAFO has already been incorporated in building hybrid spin-Hall nano-oscillators, which are essential in accelerating spintronic applications 25 .

Methods
Sample fabrication
Films of LAFO were synthesized via pulsed laser deposition (PLD) with a KrF laser ( λ = 248 nm) on (001)-oriented single crystal MGO substrates. The MGO substrates of size 5 × 5 × 0.5 mm 3 were prepared from high quality bulk single crystals grown by the Czochralski method at the Leibniz-Institut für Kristallzüchtung, Berlin, Germany, as described in detail elsewhere 42 – 44 . A pressed Li 0.6 Al 1.0 Fe 1.5 O 4 target was used for ablation, which includes an additional 0.1 Li enrichment to compensate for Li loss during deposition due to Li volatility. Prior to deposition, the target was pre-ablated at 1 Hz for 1 min, followed by 5 Hz for 1.5 min in vacuum along a circular track of radius 0.75 cm at a laser fluence of ~1.9 J/cm 2 . The substrates were cleaned via sonication in acetone and isopropanol for 10 min each. The deposition was then performed on the pre-ablated track in a 15 mTorr O 2 atmosphere with a substrate temperature of 450 °C, target-to-substrate distance of 3 in, and a laser fluence of 2.8 J/cm 2 operating at 2 Hz. The laser spot size was 6 mm 2 . After deposition, the substrate was left to cool to ambient temperature in 100 Torr O 2 . These deposition parameters give rise to a growth rate of ~0.0125 nm/pulse, and the resulting films are insulating with resistances greater than our measurement limit of 1 GΩ. It is worth noting that the synthesis temperature of LAFO is considerably lower than the 600–900 °C required for low damping epitaxial garnet films 45 – 47 . This makes LAFO compatible with a wider range of materials that can not tolerate high processing temperatures.
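As a rough consistency check, the quoted growth rate fixes the pulse count and deposition time needed for a given film thickness. The sketch below is illustrative only; the helper name and the rounding convention are our assumptions, not the authors' recipe.

```python
# Estimate the PLD pulse count and deposition time for a target LAFO
# thickness, using the growth rate quoted above (~0.0125 nm/pulse) and the
# 2 Hz repetition rate. Illustrative sketch; not the authors' procedure.

def pld_schedule(target_thickness_nm, rate_nm_per_pulse=0.0125, rep_rate_hz=2.0):
    pulses = round(target_thickness_nm / rate_nm_per_pulse)
    seconds = pulses / rep_rate_hz
    return pulses, seconds

# The 15.1 nm film corresponds to roughly 1200 pulses, i.e. about 10 minutes.
pulses, seconds = pld_schedule(15.1)
print(pulses, seconds / 60)
```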
HAADF-STEM imaging
The HAADF-STEM imaging was performed using a Titan 60-300 TEM operated at an accelerating voltage of 300 kV. Samples for cross-sectional transmission electron microscopy were prepared by focused ion-beam (FIB) milling using a Ga-ion source. Prior to TEM observations, an additional Ar-ion polishing at low voltage (500–700 V) was performed in order to remove the residual surface Ga and reduce FIB-induced sample roughness.
FMR measurements
Broadband ferromagnetic resonance (FMR) measurements were performed on a custom built FMR setup consisting of a copper waveguide with a center conductor width of 250 μ m between two electromagnets. A small modulation field of 2–3 Oe was applied on top of the DC field and a lock-in amplifier was used for signal detection after filtering through a microwave diode. The measured signal is the absorption derivative dI fmr /dH, which is then fit to a Lorentzian derivative lineshape whose two terms represent the absorptive and dispersive components of the FMR spectrum, respectively. From this fit, we can extract the FMR half-width-half-maximum linewidth Δ H hwhm and the FMR resonance field H fmr .
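The fitting step described above can be sketched numerically. The lineshape below is one common Lorentzian-derivative parameterization (an antisymmetric term from the absorptive Lorentzian plus a symmetric term from the dispersive part); the functional form, the synthetic data and the grid-search fitting strategy are our illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

# One common Lorentzian-derivative FMR lineshape: H0 is the resonance field
# and dH the half-width-half-maximum linewidth; A and B weight the
# antisymmetric (absorptive-derivative) and symmetric (dispersive-derivative)
# components.
def didh(H, H0, dH, A, B):
    d = H - H0
    den = (d ** 2 + dH ** 2) ** 2
    return A * (-2.0 * dH ** 2 * d) / den + B * dH * (dH ** 2 - d ** 2) / den

# Synthetic "measured" spectrum with known parameters (fields in mT).
H = np.linspace(200.0, 300.0, 2001)
y = didh(H, H0=250.0, dH=1.5, A=1.0, B=0.3)

# Coarse grid search over the nonlinear parameters (H0, dH); the two
# amplitudes enter linearly and are solved by least squares at each node.
best = (np.inf, None, None)
for H0 in np.linspace(248.0, 252.0, 41):       # 0.1 mT steps
    for dH in np.linspace(1.0, 2.0, 51):       # 0.02 mT steps
        d = H - H0
        den = (d ** 2 + dH ** 2) ** 2
        M = np.column_stack([(-2.0 * dH ** 2 * d) / den,
                             dH * (dH ** 2 - d ** 2) / den])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        err = float(np.sum((M @ coef - y) ** 2))
        if err < best[0]:
            best = (err, H0, dH)

_, H0_fit, dH_fit = best
print(H0_fit, dH_fit)   # best node: H0 ≈ 250.0 mT, dH ≈ 1.5 mT
```

In practice a nonlinear least-squares refinement would replace the grid search, but the separation into linear amplitudes and nonlinear shape parameters is the same.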
Second harmonic measurements
Second harmonic Hall measurements were performed with an AC current of frequency ω /2 π = 524.1 Hz and RMS density J rms ≈ 3.5 × 10 6 A/cm 2 . The first- and second-harmonic Hall voltages are then simultaneously measured using lock-in amplifiers as a function of the in-plane angle γ . Note that the first harmonic is the in-phase component whereas the second harmonic is the quadrature component.
XAS and XMCD
Room temperature XAS and XMCD were performed at beamline 4.0.2 of the Advanced Light Source, Lawrence Berkeley National Laboratory. A magnetic field of ±0.1 T was applied perpendicular to the film plane and the X-ray absorption spectra were measured as a function of energy with the X-ray beam fixed at negative circular polarization.

Results
Structural characterization
LAFO films were grown by pulsed laser deposition on (001)-oriented MgGa 2 O 4 (MGO) substrates (see Methods). Structural characterization of LAFO films indicates excellent epitaxy and crystallinity. Figure 1 a shows an atomic resolution high-angle annular dark-field scanning transmission electron microscopy (HAADF-TEM) image of the microstructure along the [110] direction with no evidence of defects or interfacial layers which is corroborated by pronounced Kiessig fringes in X-ray reflectivity spectra (Supplementary Fig. S2 ). Figure 1 b shows symmetric 2 θ − ω X-ray diffraction (XRD) scans around the (004) peak of 15.1 nm and 4.1 nm thick LAFO films. Clear Laue oscillations can be seen around the 15.1 nm (004) film peak indicating coherent diffraction, whereas the 4.1 nm film is too thin to show Laue oscillations due to the substantial peak broadening associated with the finite film thickness. The film peak is shifted to higher angles compared to that of bulk LAFO indicating the reduction of the c -axis lattice constant. Note that the broadening of the 4.1 nm film peak is not due to poor film quality, but rather due to the lower thickness. Reciprocal space map of the LAFO and MGO peaks in Fig. 1 c further indicates the coherence of the in-plane film and substrate lattice parameters. Note that the RSM is taken on an asymmetric peak with an out-of-plane component in order to capture information for both the in-plane and out-of-plane reciprocal vectors. Together, these results confirm epitaxial and dislocation-free growth of the LAFO film under coherent tensile strain on the MGO substrate (see Supplementary Material Section 1 ). Our results in the remainder of the manuscript will be focused on the 15.1 nm and 4.1 nm films. Magnetic characterization will be presented for the thicker film due to its cleaner signal, but SOT experiments will be performed on the thinner film due to its weaker anisotropy and ease of switching.
Pt/LAFO heterostructures show an epitaxial relationship between Pt and LAFO. We note that epitaxy does not mean single crystalline or in-plane aligned but merely the registry of the Pt layer with the underlying LAFO layer. This epitaxy differentiates the Pt/LAFO system from other Pt/FMI systems such as Pt/garnet bilayers. In Fig. 2 a we show high-resolution TEM of the Pt/LAFO interface in which the transition between LAFO and Pt occurs within a monolayer. This smooth interface is further corroborated by the pronounced Kiessig fringes in X-ray reflectivity (Supplementary Material Section 1 ). The presence of Pt epitaxy is indicated by a prominent Pt (111) peak as seen in symmetric XRD scans in Fig. 2 b. In-plane XRD scans shown in Fig. S2c reveal a complex epitaxial relationship between the Pt and LAFO involving a twinning pattern of the Pt domains, which is diagrammatically represented in Fig. S2d . Due to the three-fold symmetry of the Pt [111] out-of-plane oriented unit cell and the four-fold symmetry of the LAFO/MGO [001] out-of-plane oriented unit cell, the Pt layer exhibits four twins that are rotated in-plane by 30 degrees from each other. This texturing manifests as 12 distinct in-plane peaks shown in Fig. S2c . This epitaxial, but not in-plane aligned, growth of Pt on LAFO is in contrast to its incoherent growth on other materials 26 , 27 . The high quality interface facilitates the efficient transfer of spin current from the Pt to the FMI.
Magnetic characterization
We performed SQUID magnetometry on LAFO films using an Evercool MPMS by Quantum Design at room temperature. We measure the magnetization as a function of field along the in-plane and out-of-plane directions. These measurements show that LAFO films exhibit PMA with bulk saturation magnetization values M s ≈ 75 kA/m at room temperature. Figure 3 a shows magnetization ( M ) as a function of external magnetic field ( H ) on the 15.1 nm film. The magnetic easy axis lies out-of-plane along the [001] direction with a low coercivity of 1.6 mT and 100% remanence as shown in Fig. 3 b. Conversely the in-plane [100] axis is magnetically hard, requiring a field of about 0.5 T for saturation. The small opening around zero field of the in-plane trace is due to a small misalignment of the sample with respect to the in-plane magnetic field. The origin of the PMA is due to magnetoelastic coupling as a result of the epitaxial bi-axial tensile strain on the film imposed by the substrate (see Supplementary Material Section 3) 22 , 28 , 29 . We show only the data for the 15.1 nm film here, but the M s and PMA are maintained across a range of LAFO thicknesses (see Supplementary Material Section 2 ). These results are consistent with the absence of a magnetic dead layer in the TEM data (Fig. 1 a) present in other FMI thin film systems 16 , 30 .
In terms of dynamic magnetic properties, LAFO films exhibit extremely low damping values. To characterize the damping, we perform room temperature broadband ferromagnetic resonance (FMR) in the out-of-plane direction. We fit each FMR spectrum using a Lorentzian derivative lineshape, from which we obtain the FMR linewidth μ 0 Δ H hwhm and FMR resonance field μ 0 H fmr . We then study the dependence of the linewidth and FMR resonance field as a function of the microwave frequency f as shown in Fig. 3 c and d. The analysis is described in Supplemental Material Section 3 , from which we can extract the Gilbert damping parameter α = (6.4 ± 0.6) × 10 −4 and an inhomogeneous broadening of μ 0 Δ H 0 = 1.5 ± 0.1 mT for the 15.1 nm film. This value of α is the lowest reported to date for spinel structure FMI films and is approaching those reported in YIG with PMA 5 , 14 , 22 . The inhomogeneous broadening is also similar to values observed in PMA YIG 5 . We also extract the Landé g -factor as g = 2.001 ± 0.001 and the effective magnetization μ 0 M e f f = − 445.2 ± 0.4 mT. The value of g is close to the free electron value of 2.0, which implies low spin-orbit coupling. This is not surprising as magnetism in LAFO arises primarily from Fe 3+ (with L = 0). The value of M e f f is also in excellent agreement with the field required to saturate along [100] as seen in Fig. 3 a. In the past, spinel structure magnetic insulators were thought to be advantageous over the garnets due to their simpler crystal structure and lower synthesis temperature, but their damping values were consistently higher 22 , 28 , 29 , 31 . Our results show that spinel FMIs can also possess damping values that are competitive with the garnets.
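The linewidth-versus-frequency analysis used to extract the damping follows the standard Gilbert form, μ0ΔH_hwhm = μ0ΔH0 + 2παf/γ with γ = gμB/ħ. The sketch below reproduces that extraction on synthetic data generated from the fitted values quoted above; the data points themselves are not the paper's.

```python
import numpy as np

# Standard Gilbert-damping extraction: the HWHM linewidth grows linearly
# with microwave frequency, mu0*dH = mu0*dH0 + 2*pi*alpha*f/gamma, where
# gamma = g*mu_B/hbar. The data here are synthetic, built from the values
# quoted in the text (alpha ~ 6.4e-4, mu0*dH0 ~ 1.5 mT, g ~ 2.001).

g = 2.001
mu_B = 9.274e-24           # Bohr magneton, J/T
hbar = 1.0546e-34          # reduced Planck constant, J*s
gamma = g * mu_B / hbar    # gyromagnetic ratio, rad s^-1 T^-1

alpha_true, dH0_true = 6.4e-4, 1.5e-3        # dimensionless, T
f = np.linspace(5e9, 40e9, 8)                # microwave frequencies, Hz
dH = dH0_true + 2.0 * np.pi * alpha_true * f / gamma   # linewidths, T

slope, intercept = np.polyfit(f, dH, 1)
alpha_fit = slope * gamma / (2.0 * np.pi)
print(alpha_fit, intercept)   # recovers alpha ~ 6.4e-4 and mu0*dH0 ~ 1.5e-3 T
```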
Spin-orbit torque switching
To demonstrate SOT switching in LAFO, we interface our magnet with a high spin-orbit coupled metal, Pt. The Pt layer was deposited via room temperature sputtering on top of LAFO films. The critical current density required for current-induced SOT switching depends on a number of factors, including the anisotropy strength and the spin-charge conversion efficiency of the Pt/LAFO interface. In order to spin-orbit torque switch the LAFO film, we found that minimization of the LAFO thickness, accompanied by a weaker perpendicular magnetic anisotropy, reduces the critical current density for spin-orbit torque switching. Therefore we focus on the 4.1 nm samples in this section. Pt(2 nm)/LAFO(4.1 nm) Hall bars exhibit critical current densities as low as 6 × 10 5 A/cm 2 , one of the lowest observed to date for PMA FMIs at room temperature. Figure 4 a shows the Hall resistance, R x y , measured as a function of an out-of-plane magnetic field, H z , with the linear ordinary Hall contribution subtracted out. The presence of a hysteretic anomalous Hall effect is observed and likely emerges from the transfer of spin angular momentum across the Pt/LAFO interface and allows us to distinguish the up and down magnetization states 32 . A charge current passing through the Pt generates a spin current that travels towards the LAFO via the spin Hall effect. The direction of the moments in the LAFO imposes the boundary condition at the interface for spin accumulation, which in turn modifies the spin current in the Pt. The modified spin current in the Pt then produces additional transverse voltages via the inverse spin Hall effect, which is detected as a “spin Hall" anomalous Hall effect. Supplementary Material Section 4 provides a more detailed discussion of spin Hall magnetoresistance effects in Pt/LAFO bilayers.
We apply 5 ms-long DC current pulses along the Hall bar and measure R x y after each pulse in the presence of a small in-plane field, H x , oriented along the current injection axis. Figure 4 b shows R x y as a function of the pulsed current density, J D C , in a field of H x = ± 3 mT with a critical current density ∣ J c ∣ ≈ 1.5 × 10 6 A/cm 2 . Typical critical current densities at this field for 4 nm LAFO films range from 1 − 1.5 × 10 6 A/cm 2 . Also as seen in Fig. 4 b, the switching polarity reverses direction on reversing the applied field direction + H x → − H x as expected from the SOT switching mechanism.
To show that the switching is repeatable, we show R x y measured during a sequence of current pulses with alternating sign, and magnitude of 7.5 × 10 6 A/cm 2 in Fig. 4 c. The value of R x y switches with the pulses and recovers nearly 100% of its full value of about ± 3.1 mΩ after each pulse, demonstrating consistent and reversible switching. By performing the same switching experiments at different in-plane fields, we observe J c monotonically decrease with increasing H x as expected 33 . This is shown in Fig. 4 d, where J c as low as 6 × 10 5 A/cm 2 is achieved for H x = 8.5 mT. As a comparison, a study of a YIG(5 nm)/Pt system reported a switching current of 3 × 10 7 A/cm 2 for H x = 5.0 mT, more than an order of magnitude larger than the corresponding value in our system (Fig. 4 d) 23 . Other studies in YIG and thulium iron garnet systems have reported J c values on the order of 10 7 A/cm 2 at fields on the order of a few tens of mT 5 , 24 . We note however that a comparison of J c across different systems is not meaningful as the value of J c also depends strongly on the strength of the PMA and the thicknesses of the magnetic and Pt layers. We also performed SOT switching in a 15 nm LAFO film (Supplementary Materials Section 7 ), where the J c is comparable to those reported in Pt/YIG systems due to the larger anisotropy of thicker LAFO films. However, LAFO provides a rare combination of weak PMA at ultra-low thicknesses, both of which help lower J c . This combination allows the realization of ultra-low J c at low thicknesses.
The presence of an in-plane field is necessary to break the symmetry in order to achieve deterministic switching, but is detrimental for device applications. Field-free switching has been achieved in select systems by breaking the symmetry in other ways, including using exchange bias and physically engineering an asymmetric stack 34 , 35 . We hope that similar techniques can be incorporated with LAFO in the near future to increase its practicality.
The SOT switching can also be characterized by the SOT efficiency, which is a measure of the spin to charge conversion of the Pt/LAFO bilayer. The SOT efficiency takes into account the spin transparency of the interface and the spin Hall angle of Pt and can be estimated with in-plane angular harmonic Hall measurements. An in-plane magnetic field B ext = μ 0 H ext larger than the anisotropy (see Supplementary Material Sections 4 and 5 ) is applied at an angle γ with respect to the current channel (Fig. 5 a inset) and an AC current is used to measure the Hall voltages. For a system with low in-plane anisotropy, the first- and second-harmonic Hall voltages are given by Eqs. ( 1 ) and ( 2 ) (refs. 36 – 38 ), where I rms is the RMS amplitude of the AC current, R PHE and R AHE are amplitudes of the planar Hall and anomalous Hall effect respectively, B FL and B DL are the field-like (FL) and damping-like (DL) effective fields associated with the spin-orbit torque, α ANE is a term characterizing the parasitic contribution arising from the anomalous Nernst effect (ANE), β ONE is a term characterizing the contribution from the ordinary Nernst effect (ONE), and B eff = B ext − μ 0 M eff where M eff is the effective magnetization. Figure 5 a shows the angular dependence of the first-harmonic Hall voltage at B ext = 0.1 T. We fit this to extract R PHE = 92 ± 4 mΩ as per Eq. ( 1 ). The relatively large uncertainty is due to a slight variation of R PHE for different applied fields. R AHE ≈ 3.1 mΩ was obtained by measuring R x y using an out-of-plane field as shown in Fig. 4 a. To obtain B FL and B DL , we fit the angular dependence of the second-harmonic Hall voltage, as shown in Fig. 5 b, to extract the FL and DL + Nernst effect (ANE and ONE) contributions as per Eq. ( 2 ). Figure 5 c and d show the inverse field dependence of these contributions, whose fits allow us to extract B FL and B DL as −0.70 ± 0.03 mT and 30.5 ± 0.8 mT per J rms ≈ 3.5 × 10 6 A/cm 2 respectively. From the fit in Fig. 5 d we also obtain α ANE = 19 ± 3 nV and β ONE = − 40 ± 20 nV/T, indicating that the Nernst contributions and current heating are negligible.
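With B DL in hand, the damping-like torque efficiency can be estimated from the standard conversion θ DL = (2e/ħ) M s t LAFO B DL / J Pt. The sketch below is a numerical check using the values quoted above; taking the bulk M s ≈ 75 kA/m for the 4.1 nm film and J Pt = √2 × J rms (the AC amplitude) are our assumptions for illustration.

```python
import math

# Numerical check of theta_DL = (2e/hbar) * M_s * t_LAFO * B_DL / J_Pt using
# the values quoted in the text. Assumptions of this sketch: the bulk
# saturation magnetization (~75 kA/m) also applies to the 4.1 nm film, and
# J_Pt is the AC current amplitude, sqrt(2) times the quoted RMS density.

e = 1.602e-19           # electron charge, C
hbar = 1.0546e-34       # reduced Planck constant, J*s
M_s = 75e3              # saturation magnetization, A/m
t_LAFO = 4.1e-9         # film thickness, m
B_DL = 30.5e-3          # damping-like effective field, T
J_rms = 3.5e6 * 1e4     # 3.5e6 A/cm^2 converted to A/m^2
J_Pt = math.sqrt(2.0) * J_rms

theta_DL = (2.0 * e / hbar) * M_s * t_LAFO * B_DL / J_Pt
print(theta_DL)   # ~0.58 under these assumptions, consistent with the quoted 0.57 +/- 0.01
```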
To quantify the efficiency of spin-to-charge conversion in our system, we calculate the DL and FL spin torque efficiencies as θ DL,FL = (2e/ħ) M s t LAFO B DL,FL / J Pt (ref. 37 ), where e is the electron charge, ħ is the reduced Planck’s constant, M s is the saturation magnetization, t LAFO is the thickness of the LAFO layer, and J Pt is the current density amplitude in the Pt. From this, we obtain θ DL = 0.57 ± 0.01 and θ FL = 0.013 ± 0.001. The large value of θ DL is consistent with the J c required for switching. We do however note that the second harmonic Hall technique has a tendency to overestimate the SOT efficiencies 39 .

Discussion
A recent study on a different Pt/FMI spinel system using Mg(Al,Fe) 2 O 4 (MAFO) found an above average damping-like SOT efficiency of θ DL ≈ 0.15 40 . However, the damping-like SOT efficiency in Pt/MAFO is still lower than that observed in Pt/LAFO, indicating that Pt epitaxy alone is insufficient to explain the exceptional charge-to-spin conversion efficiency in Pt/LAFO bilayers. One key difference between MAFO and LAFO is that a 2 nm thick magnetically dead layer at the MAFO film/substrate interface limits film quality in the ultra-thin regime 30 . Such magnetic defects can prevent the entirety of a film from being switched uniformly with magnetic field or current. In contrast, the minimal defects in LAFO/MGO make it a much cleaner system for SOT switching. Bulk saturation magnetization values and the absence of dead layers at a few unit cells of LAFO allow the SOT to act on the entirety of a magnetically uniform, high quality ultra-thin LAFO film, and are contributing factors to the large θ DL . The high quality Pt/LAFO interface can also facilitate the efficient transfer of spin across it. For instance, it is known in YIG that a poor interface with the Pt results in poor spin transfer 41 .
In summary, we have demonstrated a promising new class of nanometer-thick low damping spinel ferrite thin films with highly efficient current-induced SOT switching. These LAFO films on MGO exhibit critical switching current densities as low as 6 × 10 5 A/cm 2 when interfaced with Pt and a damping-like SOT efficiency as high as θ DL ≈ 0.57. This superior performance was attributed to a combination of the excellent epitaxial quality of LAFO in the ultra-thin regime, the epitaxial growth of the Pt overlayer, and the high quality Pt/LAFO interface. LAFO also has been shown to have one of the lowest magnetic damping values of PMA FMIs to date, making it promising in numerous other applications. Altogether LAFO on MGO is the first demonstration of all of the desirable properties of low damping PMA materials in one material, and represents an unprecedented step towards the realization of a new type of spin-wave material platform for the next generation of spintronic devices.

Ultra-thin films of low damping ferromagnetic insulators with perpendicular magnetic anisotropy have been identified as critical to advancing spin-based electronics by significantly reducing the threshold for current-induced magnetization switching while enabling new types of hybrid structures or devices. Here, we have developed a new class of ultra-thin spinel structure Li 0.5 Al 1.0 Fe 1.5 O 4 (LAFO) films on MgGa 2 O 4 (MGO) substrates with: 1) perpendicular magnetic anisotropy; 2) low magnetic damping and 3) the absence of degraded or magnetic dead layers. These films have been integrated with epitaxial Pt spin source layers to demonstrate record low magnetization switching currents and high spin-orbit torque efficiencies. These LAFO films on MGO thus combine all of the desirable properties of ferromagnetic insulators with perpendicular magnetic anisotropy, opening new possibilities for spin based electronics.
Ferromagnetic insulators offer low magnetic damping, and potentially efficient magnetic switching, making them ideal candidates for spin-based information processing. Here, Zheng et al introduce a ferromagnetic insulator spinel, Li 0.5 Al 1.0 Fe 1.5 O 4 , with low magnetic damping, perpendicular magnetic anisotropy, and no magnetic dead layer.
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-023-40733-9.
Acknowledgements
We thank Satoru Emori for helpful discussions and Jutta Schwarzkopf for a critical reading of the paper. This work was supported by the U.S. Department of Energy, Director, Office of Science, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Contract No. DESC0008505. S.C. and L.J.R. were supported by the Air Force Office of Scientific Research under Grant No. FA 9550-20-1-0293. J.J.W. was supported by the National Science Foundation under Award DMR-2037652. L.J.R. was also supported by an NSF Graduate Fellowship. H.R. was supported by Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C), an Energy Frontier Research Center funded by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award DE-SC0019273. E.C. and A.D.K. were supported by the National Science Foundation under Award DMR-2105114. This research used resources of the Advanced Light Source, a U.S. DOE Office of Science User Facility under contract no. DE-AC02-05CH11231. X-ray diffraction was performed at the Stanford Nano Shared Facilities at Stanford University, supported by the National Science Foundation under Award No. ECCS-1542152. Krishnamurthy Mahalingam and Cynthia Bowers acknowledge funding support from the Air Force Research Laboratory under Awards: AFRL/NEMO: FA8650-19-F-5403 TO3 and AFRL/MCF: FA8650-18-C-5291, respectively. Research at NYU was supported by NSF DMR-2105114.
Author contributions
X.Y.Z., S.C. and Y.S. conceived of the research ideas; X.Y.Z., S.C., L.J.R. and J.J.W. and A.V. conducted X-ray and static magnetic characterization, AFM, FMR and transport measurements; K.M., C.T.B. and M.E.M. conducted TEM measurements; A.T.N. conducted XMCD measurements; E.C., H.R., and A.D.K sputtered Pt on the films; Z.G. supplied the MGO substrates; X.Y.Z., S.C. and Y.S. wrote the manuscript; Y.S. supervised the research.
Peer review
Peer review information
Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-16 23:35:00 | Nat Commun. 2023 Aug 15; 14:4918 | oa_package/22/91/PMC10427713.tar.gz |
|
PMC10474874 | 37662555 | Introduction
Mental illnesses are among the most debilitating medical conditions due to their prevalence ( Lewinsohn et al., 1998 ) and the vast toll they take on individuals ( Monkul et al., 2007 ) and society ( Leon et al., 1998 ). Yet the brain and the malfunctioning neural dynamics responsible for mental illness remain largely a mystery. Because of this, the National Institutes of Health is actively encouraging research ( Insel et al., 2013 ) to understand the brain at all levels of organization. Of the many types of electrophysiology commonly studied, our work focuses on understanding the dynamics of region-specific voltages in the brain, known as local field potentials (LFPs). Field potentials have proven to be good candidates for understanding dynamics because the learned features are predictive of neuropsychiatric conditions ( Veerakumar et al., 2019 ) and can generalize to new individuals ( Y. Li et al., 2017 ), allowing us to characterize and predict diseases using features that apply to the general population rather than a specific individual.
Recent developments have allowed neuroscientists to test hypotheses on neural dynamics by directly modifying the brain dynamics underlying actions and behaviours. For example, optogenetics ( Boyden et al., 2005 ) is a set of techniques that use a virus to modify the cells in specific brain regions to respond to light by firing action potentials. The neural dynamics can then be modified by shining light through implanted fibre optic cables into these modified regions, inducing aggregate cell behaviour that in turn affects the LFPs ( Buzsáki et al., 2012 ). The benefits of this development cannot be overstated. Neuroscientists can now use experimental manipulation to argue for the causality of cell types in behaviour, rather than mere correlation. But more importantly, it opens the door for targeted neurostimulation treatments of mental illnesses. Many of the current treatment methods for psychiatric disorders rely on drugs, which can have debilitating side effects ( Short et al., 2018 ) due to their broad distribution in the brain and the low temporal resolution at which they impact brain cells. Direct manipulation of the faster dynamics responsible for the illness has the potential to cure these diseases without the negative consequences of medication.
However, we must move beyond mere predictive models to obtain the necessary insight into causal mechanisms for such interventions to succeed. For stimulation to be effective we must find a neural circuit responsible for the observed dynamics ( Kravitz et al., 2010 ) rather than just finding predictors that correlate with the behaviour. Such collections of neural circuits, hereafter referred to as networks, inferred by a statistical method must be biologically relevant and provide enough information about brain function to find targets for stimulation. This is analogous to creating a harmony in an orchestra: one must be able to identify the out-of-tune instrument rather than merely detecting the errant notes produced.
This analogy of an orchestra is particularly apt for neuroscience where it is commonly assumed that a small number of these unobserved networks give rise to the high-dimensional dynamics ( Medaglia et al., 2015 ). This viewpoint has often led researchers to use factor analysis as a method for understanding brain activity networks. Many factor models have been created to incorporate varying assumptions about the latent networks and resulting dynamics, including independent component analysis (ICA) ( Du et al., 2016 ), non-negative matrix factorizations (NMF) ( Lee & Seung, 1999 ), and principal component analysis (PCA) ( Feng et al., 2015 ). Some factor models have been developed to specifically identify possible locations where stimulation would be effective ( Gallagher et al., 2017 ).
Our objective is to find a brain network in a mouse model associated with two traits: being in a stressful state and having a genotype that has been linked to bipolar disorder, ClockΔ19, compared to wild-type littermate controls using the data described by Carlson et al. (2017) . Mice with the ClockΔ19 genotype have been shown to be resilient to stressful situations ( Dzirasa et al., 2011 ; Roybal et al., 2007 ; Sidor et al., 2015 ), so understanding how mice with and without the genotype differ can assist in the understanding of stress response networks. The data in this setting are LFP recordings from both ClockΔ19 and wild-type mice in a variety of situations ranging from not stressful to highly stressful. It is often a scientific goal to find a single network that differentiates between mice in different groups. This goal necessitates that the predictive information is located in a single factor, and this factor must be interpretable to neuroscientists in order to find candidate locations for stimulation. Manipulation of the dynamics will not be possible unless all these conditions are met.
Factor models that focus exclusively on modelling the electrophysiology data often fail to find networks that are predictive of traits of interest (i.e., stress condition, genotype). This suggests using joint factor models that assume common latent variables underlying both electrophysiology and traits. Unfortunately, joint factor models place a very strong prior on the predictive coefficients that can substantially degrade predictive performance ( Hahn et al., 2013 ), particularly if the number of estimated factors is smaller than the number of true factors. The true networks responsible for neural dynamics are numerous, certainly more numerous than the number of factors we can reliably estimate. In addition, most of these networks are unrelated to the traits of interest. These irrelevant networks can be particularly dominant, for example those related to motion ( Khorasani et al., 2019 ) or blinking ( Joyce et al., 2004 ). One potential solution is to increase the weight on the predictive component when fitting the joint model. However, we find that although such an approach can do well in-sample, it fails to accurately infer predictive networks for test subjects based on information on electrophysiology data alone and hence cannot address our goals.
Supervised autoencoders (SAEs) ( Ranzato & Szummer, 2008 ) have arisen as an alternative to classical joint factor models. An autoencoder is a method for dimensionality reduction that maps inputs to a lower dimensional space using a neural network known as an encoder. The latent space is then used to reconstruct the original observations by a neural network called a decoder ( Goodfellow et al., 2016 ). Both the encoder and the decoder are learned simultaneously to minimize a reconstructive loss. Supervised autoencoders add another transformation from the latent factors to predict the outcome of interest, thus encouraging the latent factors to represent the outcome well ( Xu et al., 2017 ). These have been used with great success in image recognition, especially in the context of semi-supervised learning ( Le et al., 2018 ; Pezeshki et al., 2016 ; Pu et al., 2016 ). In this work, we show how SAEs can be adapted to solve a difficult problem in neuroscience: finding a single network suitable for stimulation to modify a trait. Finding such a predictive network has proven difficult, as experimental predictive ability has failed to match theoretical expectations with traditional inference techniques. We show empirically on synthetic data that model misspecification in generative models is a substantial contributor to these difficulties. We then demonstrate on both synthetic data and our experimental data set that our SAE-based approach is able to successfully identify such a predictive network, even under substantial misspecification. Interpreting this network leads to natural conclusions on potential stimulation targets. Previous studies have experimentally modified this target to successfully modulate animal behaviour ( Carlson et al., 2017 ). Together, these contributions provide substantial evidence for the promise of our method as a useful tool for experimentalists to develop stimulation methods as potential treatments for mental illness.
Notably, the proposed methodology has been used to successfully design a targeted neurostimulation protocol ( Block et al., 2022 ; Mague et al., 2022 ), providing further evidence of this claim.
This paper is organized as follows: Section 2 contains a description of the data and motivation, along with defining joint factor analysis and demonstrating drawbacks under model misspecification. Section 3 derives our SAE approach while considering some basic properties and issues to motivate modifications of previous SAE frameworks. Section 4 provides two synthetic examples to illustrate the benefits of SAEs, one with a standard NMF model and another using synthetic LFPs. Section 5 shows that our approach learns predictive networks of genotype and stress, describes how the inferred ‘stress network’ could be modified through stimulation, and relates the network to previous literature. In Section 6 , we provide concluding remarks and discuss further directions for research. Code for reproducing these results can be found at https://github.com/carlson-lab and the LFP data is publicly hosted at the Duke University Research Data Repository ( Carlson et al., 2023 ). | Conclusions
We have developed an approach for finding a single stimulatable brain network that correlates with a condition or behaviour. Our strategy is based on supervised autoencoders, which are more conducive to finding a single network predictive of an outcome of interest than joint factor models. We are able to find a brain network that is correlated with stress and a network that is correlated with a genotype linked to bipolar disorder. This approach naturally leads to candidate brain regions for stimulation to modify the network.
While the objective of this work is developing an approach for identifying relevant brain networks, we also provide broad conceptual contributions to understand why, how, and when SAEs are effective model choices. This is done by deriving SAEs in a manner that elucidates the source of superior predictive ability relative to joint factor models, and using analytic solutions in a simple case to illustrate why carefully designed SAEs are useful in our applications. | Conflict of interest All authors declare that they have no conflicts of interest.
Abstract
Targeted brain stimulation has the potential to treat mental illnesses. We develop an approach to help design protocols by identifying relevant multi-region electrical dynamics. Our approach models these dynamics as a superposition of latent networks, where the latent variables predict a relevant outcome. We use supervised autoencoders (SAEs) to improve predictive performance in this context, describe the conditions where SAEs improve predictions, and provide modelling constraints to ensure biological relevance. We experimentally validate our approach by finding a network associated with stress that aligns with a previous stimulation protocol and characterizing a genotype associated with bipolar disorder. | Data and model
In this section, we introduce the electrophysiological data analysed in this paper and the scientific motivation for a latent variable model. We highlight the need for a single latent variable to predict the experimental outcomes; here, these outcomes relate to an animal model of stress and a genotype associated with bipolar disorder. As part of this section, we highlight the importance of robustness to certain types of model misspecification (here, primarily latent dimensionality), which are nearly ubiquitous in neuroscience. Unfortunately, typical inference strategies for latent variable models are not robust to such misspecification, as we show in simulations. We show misspecification has a strong deleterious effect on previous approaches to learn predictive factor models.
Electrophysiology: the tail suspension test
The LFPs analysed in the tail suspension test (TST) came from 26 mice of two genetic backgrounds (14 wild type and 12 ClockΔ19). The ClockΔ19 genotype has been used to model bipolar disorder ( van Enkhuizen et al., 2013 ). Each mouse was recorded for 20 min across 3 behavioural contexts: 5 min in its home cage (non-stressful), 5 min in an open field (arousing, mildly stressful), and 10 min suspended by its tail (highly stressful). Data were recorded from 11 biologically relevant brain regions with 32 electrodes (multiple electrodes per region) at 10,000 Hz. These redundant electrodes were implanted to allow for the removal of faulty electrodes and electrodes that missed the target brain region. We chose to average the signals per region to yield an 11-dimensional time series per mouse. This was done because the brain region represents the smallest resolvable location when modelling multiple animals; multiple electrodes function as repeated measurements and averaging allows us to reduce the variance of the measured signal. A visualization of these data and the experimental design can be seen in Figure 1 .
We want to determine a single brain network that predicts stressful activity, so we consider all data from the home cage as the negative label (stress-free condition) and all data from the other two conditions as the positive label (stressed condition). A second prediction task is to determine brain differences between the genetic conditions (i.e., what underlying differences are there between the wild type and bipolar mouse model?). There is strong evidence to support the belief that the behaviourally relevant aspects of electrophysiology are frequency-based power within brain regions, as well as frequency-based coherence between brain regions ( Hultman et al., 2016 ; Uhlhaas et al., 2008 ). We discretized the time series into 1-s windows to model how these spectral characteristics change over time. Windows containing saturated signals were removed (typically due to motion artefacts). While there are methods that can characterize the spectral features on a continuous time scale ( Prado & West, 2010 ), the behaviour we deal with changes over a longer scale than the observed dynamics. Consequently, it is more effective to discretize and obtain sharper spectral feature estimates ( Cohen, 1995 ) that are more amenable to factor modelling.
We chose to extract the relevant quantities from the recorded data prior to modelling rather than extracting spectral features in the modelling framework for simplicity; the extra modelling step would substantially increase the number of parameters in the model. The features related to power were computed from 1 to 56 Hz in 1 Hz bands using Welch’s method ( Welch, 1967 ), which is recommended in the neuroscience literature ( Kass et al., 2014 ). We chose 56 Hz as a threshold to avoid the substantial noise induced at 60 Hz from the recording equipment, as prior literature has demonstrated that much of the meaningful information is contained in the lower frequencies. We calculated mean squared coherence at the same frequency bands for every pair of brain regions ( Rabiner & Gold, 1975 ). This procedure converted each window of data into 3,696 non-negative power and coherence features. Figure 2 shows two LFPs and the associated features calculated from the recordings.
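As a concrete sketch of this feature-extraction step, the windowed power and coherence features can be computed with standard signal-processing tools. This is a minimal illustration, not the authors' pipeline: the sampling rate `FS`, the `nperseg` choice, and the `window_features` helper are assumptions, and it uses coarser 4 Hz bands rather than the paper's 1 Hz bands.

```python
import numpy as np
from scipy.signal import welch, coherence

FS = 1000    # assumed sampling rate per window (the study recorded at 10,000 Hz)
FMAX = 56    # upper frequency bound, avoiding 60 Hz line noise

def window_features(lfp, fs=FS):
    """lfp: (n_regions, n_samples) array for one 1-s window.
    Returns per-region Welch power and pairwise squared coherence,
    restricted to 1-FMAX Hz, concatenated into one non-negative vector."""
    n_regions = lfp.shape[0]
    nperseg = fs // 4  # 4 Hz bands here; the paper used 1 Hz bands
    feats = []
    for r in range(n_regions):
        f, pxx = welch(lfp[r], fs=fs, nperseg=nperseg)
        feats.append(pxx[(f >= 1) & (f <= FMAX)])
    for i in range(n_regions):
        for j in range(i + 1, n_regions):
            f, cxy = coherence(lfp[i], lfp[j], fs=fs, nperseg=nperseg)
            feats.append(cxy[(f >= 1) & (f <= FMAX)])
    return np.concatenate(feats)
```

Applied to every 1-s window and stacked row-wise, this yields the kind of non-negative feature matrix that the factor models described later take as input.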
Difficulties with joint factor models
After extracting the relevant quantities described above, we have high-dimensional data (which scales by the square of the brain regions and number of frequencies considered). A natural approach for characterizing the data is to use latent variable models. These models posit that an unobserved latent space (factors or in the neuroscience field ‘networks’) generates the observed dynamics, with the mapping between the two (loadings) explaining the correlations in the observed covariates. As the dimensionality of the latent space is often far lower than the observed space, this allows for efficient representation of the correlations in the data ( Bishop, 2006 ). The covariates in neuroscience often have strong correlations, so it is unsurprising that latent variable models have been extensively used ( Bassett & Sporns, 2017 ). The latent representation must also contain information relevant to the outcomes relating to stress and genotype.
There is a vast literature of methods that reduce dimensionality while retaining predictive information. One widely studied approach is sufficient dimensionality reduction (SDR), which exploits the assumption that the response is conditionally independent of the predictors given a projection of the predictors into a lower dimensional subspace. Examples of this technique include sliced inverse regression ( K.-C. Li, 1991 ) and extensions ( Fukumizu et al., 2009 ; Wu et al., 2009 ). However, these approaches require stringent assumptions on the data and the outcome, which limits their utility in neuroscience. In particular, these models induce a strong prior on the predictive coefficients, which leads results to be sensitive to model misspecification. This difficulty can be seen particularly clearly with misspecification of the latent dimensionality ( Hahn et al., 2013 ).
Joint factor models are appealing modelling choices to incorporate information about both the covariates and the outcomes, as demonstrated by the popularity and use of supervised probabilistic PCA ( Yu et al., 2006 ), adding supervised variables in highly related topic models ( McAuliffe & Blei, 2007 ), and the use of the concept in deriving supervised dictionary learning algorithms ( Mairal et al., 2009 ). We now formally introduce these models. To define our notation, let $x$ be the measured dynamics (predictors) and $y$ be the corresponding trait or outcome. The latent factors are denoted $s$, Θ are the parameters relating the factors to $x$, and Φ are the parameters relating the factors to $y$. To handle intractable integrals involved in marginalizing over the latent factor distributions ( Gallagher et al., 2017 ), two common approaches are to optimize a variational lower bound similar to a variational autoencoder ( Kingma & Welling, 2013 ) or alternatively to estimate the factors jointly with the model parameters ( Mairal et al., 2009 ). We choose to proceed with the latter, due to its previous usage in similar applications.
Given this strategy, a generic objective function for a joint factor model is
$$\min_{\{s_i\}, \Theta, \Phi} \; \sum_{i=1}^{N} \left[ \mu\, \mathcal{L}_x(x_i, s_i; \Theta) + \mathcal{L}_y(y_i, s_i; \Phi) \right],$$
where $\mathcal{L}_x$ and $\mathcal{L}_y$ denote the negative log-likelihoods of the predictors and outcome given the factors.
The tuning parameter μ controls the relative weight between reconstruction and supervision. Including μ is somewhat atypical, and we note that setting $\mu = 1$ recovers the log-likelihood of a standard joint factor model. In practice, it may be important to modify μ to upweight the importance of prediction by setting $\mu < 1$. This term can be thought to correspond to a fractional posterior, which has been previously used in robust statistics ( Bhattacharya et al., 2019 ), and is highly related to the common practice of modifying the noise variances on $x$ and $y$ in joint factor models ( Yu et al., 2006 ). This tuning parameter is frequently used in machine learning ( Mairal et al., 2009 ). Given this objective function, we find the optimal values of the latent factors, Θ, and predictive coefficients Φ. This is commonly done with stochastic methods due to their computational advantages in large data sets.
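As an illustration of this kind of μ-weighted objective, the sketch below uses Gaussian (squared-error) losses in place of the generic negative log-likelihoods; the `joint_objective` helper and its argument shapes are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def joint_objective(X, Y, S, W, D, mu):
    """mu-weighted joint factor objective with Gaussian (L2) losses.
    mu = 1 corresponds to a standard joint model up to constants;
    mu < 1 upweights prediction relative to reconstruction.
    X: (N, p) predictors, Y: (N, q) outcomes, S: (N, L) factors,
    W: (p, L) loadings, D: (q, L) predictive coefficients."""
    recon = np.sum((X - S @ W.T) ** 2)   # reconstruction loss on X
    pred = np.sum((Y - S @ D.T) ** 2)    # supervision loss on Y
    return mu * recon + pred
```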
However, joint models are not without their own drawbacks, as we now demonstrate using analytic solutions with an $\ell_2$ loss on a misspecified model. Furthermore, as motivation for our approach, we show that SAEs are not as affected by this type of misspecification. We generated a synthetic data set corresponding to supervised probabilistic PCA ( Yu et al., 2006 ), a simple joint linear model. We set the number of predictors to 20 with a single outcome and a latent dimensionality of 3. The largest factor was unpredictive of the outcome. Additionally, we limited the inferred dimensionality to 2. This mimics our statistical models that may under-specify the true number of brain networks. We fit a linear joint model and an SAE, which we will fully describe in Section 3 , with two components over a range of supervision strengths using the analytic solutions and show the results in Figure 3 . At low supervision strengths (large values of μ ), both the joint model and SAE focus on reconstructing the data. At higher strengths (small values of μ ), they sacrifice some of the reconstruction for better predictions of the outcome. However, the joint model simply overfits the factors by making them overly dependent on y , leading to an effect we refer to as ‘factor dragging’. We define this as
$$\Delta(x, y) = \left\lVert f_{x,y}(x, y) - f_{x}(x) \right\rVert,$$
where $f_{x,y}$ and $f_{x}$ represent the mapping to $s$ when both $x$ and $y$ are observed and when just $x$ is observed, respectively. This represents the difference between the factor estimates when the outcome is known (training) and when the outcome is unobserved (testing). For a joint factor model in this set-up, this corresponds to minimizing the full μ-weighted loss over $s$ with $y$ observed versus minimizing only the reconstruction loss when $y$ is unavailable. When the latent space is dedicated to reconstructing $x$, this difference will be small. However, as μ is increasingly small, the values of $s$ will be largely influenced by the predictive loss on y when it is observed, and this difference will become increasingly substantial when y is not observed. This is particularly relevant to our applications, as the dimensionality of $x$ often corresponds to between 3,000 and 10,000 features. In order to influence the latent space, we often must roughly ‘balance’ the reconstructive loss and predictive loss, corresponding to a value of $\mu \ll 1$. The rightmost plot of Figure 3 shows this pattern, which is increasingly large with strong supervision. Large discrepancies will lead to poor predictions, as it indicates that the absence of knowledge of the outcome y (inherent to prediction) dramatically affects the latent representation, and by extension the reconstruction/prediction. In contrast, our SAE approach, which uses the same map from $x$ regardless of whether y is observed, circumvents these issues and is broadly applicable as we fully explain in Section 3 . Some efforts have been made to overcome these known issues, particularly task-driven dictionary learning ( Mairal et al., 2011 ). While task-driven dictionary learning addresses this specific issue, it is developed specifically as a matrix decomposition method and lacks the flexibility to be incorporated in many neuroscience models, such as Gaussian process-based models ( Gallagher et al., 2017 ). While these models are not explored in this work, our approach naturally generalizes to these models.
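The factor-dragging effect can be reproduced in a few lines, assuming a linear model ($x \approx Ws$, $y \approx Ds$) with squared-error losses; the specific dimensions and values of μ below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, L = 20, 2
W = rng.normal(size=(p, L))   # loadings, x ~ W s
D = rng.normal(size=(1, L))   # predictive map, y ~ D s
x = rng.normal(size=p)
y = np.array([3.0])

def s_with_y(mu):
    # factor estimate when both x and y are observed (training time):
    # argmin_s  mu * ||x - W s||^2 + ||y - D s||^2
    A = mu * W.T @ W + D.T @ D
    return np.linalg.solve(A, mu * W.T @ x + D.T @ y)

def s_without_y():
    # factor estimate from x alone (test time): least-squares projection
    return np.linalg.solve(W.T @ W, W.T @ x)

# the train/test discrepancy ("factor dragging") grows as mu shrinks,
# i.e. as the supervision loss increasingly dominates the objective
gap = {mu: np.linalg.norm(s_with_y(mu) - s_without_y()) for mu in (10.0, 0.01)}
```

Shrinking μ makes the train-time estimate depend ever more on y, so its gap to the test-time estimate, which must be computed from x alone, grows.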
There have been efforts to fix this problem, such as ‘cutting-the-feedback’ approaches ( Liu et al., 2009 ; McCandless et al., 2010 ; Plummer, 2014 ). In Markov chain Monte Carlo-based methods with this approach, the samples of the latent variables are drawn using only the electrophysiology (hence the influence from the outcome is cut). The corresponding maximum likelihood technique fits a factor model exclusively on the electrophysiology and then fits a predictive model using the estimated factors. The problem with cutting-the-feedback is immediately apparent; dominant networks are often associated with motion, maintaining homeostasis, or other irrelevant behaviours. The latent space will likely not contain information relevant to prediction when the outcome is ignored ( Jolliffe, 1982 ).
An alternative solution for factor models is to substantially increase the number of latent components. This will capture the relevant information, but the information will be scattered across multiple networks. This is particularly harmful in our application, as the latent network itself is used to choose stimulation targets ( Carlson et al., 2017 ). If the predictive information is located in a single network, neuroscientists can choose regions that are active and well connected in the predictive network to stimulate or run downstream tests on a single variable with higher statistical power. If the generative model is a good representation of the neural dynamics, this will result in a global stimulation of the predictive network resulting in altered behaviour. However, if the predictive information is scattered across multiple networks, it is impossible to identify which connected regions of the networks will result in altered behaviour. We will show that all these issues arise when predicting stress and genotype using electrophysiology on the TST data set, and that our SAE-based approach addresses these issues, resulting in substantially improved performance.
An SAE-based approach
We now introduce a novel SAE-based approach to learn a single predictive latent factor, and discuss reasons for superior performance in practice. We then derive analytic solutions for SAEs in a linear model. These solutions are used to contrast the behaviour of SAEs to traditional inference techniques in an illustrative example. Finally, we analyse the appropriateness of SAEs in biological applications based on common assumptions.
Why do SAEs yield superior predictions compared to joint factor models?
Given a new sample $x^{*}$, prediction of $y^{*}$ given $x^{*}$ is straightforward once Θ and Φ have been estimated; simply compute point estimates of the factors and outcome as $\hat{s} = \arg\max_{s} \log p(x^{*} \mid s, \Theta)$ and $\hat{y} = \arg\max_{y} \log p(y \mid \hat{s}, \Phi)$. However, these predictions are often quite poor. This issue does not stem from the inference method, as the example from Section 2 maximized Θ and Φ directly and yet still yielded poor predictions. Instead, the issue is due to the model attempting to explain all the variance of the outcome using the latent factors ( Hahn et al., 2013 ). If there is a high amount of weight on predictive performance, a joint model will discover that the best predictor of y is y itself. This influence is most keenly felt when L is small, as it is difficult to force the model to prioritize prediction of y without increasing the dependence on y for proper estimation of $s$.
While there are a plethora of methods for selecting L , such as cross-validation or carefully designed priors ( Bhattacharya & Dunson, 2011 ), these methods will be largely unhelpful in improving predictive performance. These methods value a parsimonious representation of the joint likelihood, corresponding to fewer latent factors. This conflicts with a prior focussed on predictive ability which would encourage more factors. While cross-validation of the predictive accuracy over the dimensionality seems like an appealing option, in practice it still yields subpar predictions as compared to a purely predictive model and has difficulty finding a single predictive latent factor.
SAEs are an alternative to joint models that modify a deep feedforward network to include an autoencoder in order to reconstruct the predictors. Let us define the predictive feedforward network as $g(x) = \psi(e(x))$, where $e$ is the encoder, and the reconstructive autoencoder as the composition of the encoder and a decoder, $d(e(x))$. For estimation, it is common to place losses on the reconstructive and predictive parameters as well as the latent factors as regularization. This yields an objective function,
$$\min_{\Theta, \Phi} \; \sum_{i=1}^{N} \left[ \mu\, \mathcal{L}_x\!\left(x_i, d(e(x_i)); \Theta\right) + \mathcal{L}_y\!\left(y_i, \psi(e(x_i)); \Phi\right) + \mathcal{R}(e(x_i)) \right],$$
where $\mathcal{R}$ is a regularization penalty on the latent factors.
To maintain the desired interpretability of joint models, we choose the parameters and losses to correspond to the parameters and negative log likelihoods from the joint model. An SAE learns a mapping from $x$ to $s$ that is used to predict y due to the supervision loss, incorporating the relevant information during training. However, this mapping is only dependent on $x$ in contrast to joint models which depend on both variables as inputs. Once the SAE is estimated, at test time $x$ alone is used in prediction and reconstruction of the electrophysiology. Because of this, SAEs are limited to modelling the variance in y that can be predicted by $x$. From this point of view, the reconstruction loss can be considered a deep-learning version of the regularization discussed in Hahn et al. (2013) , and was explicitly motivated as such in Le et al. (2018) . In these situations, the reconstruction loss placed on the latent space functions as a method to reduce the complexity of the model, corresponding to a structural risk minimization (SRM) approach ( Vapnik, 1992 ). This technique has been used successfully to regularize complex models ( Guyon et al., 1992 ; Kohler et al., 2002 ). However, there is one substantial difference between our approach and a typical SRM approach. With an SRM, the complexity penalty shrinks with an increasing number of observations (corresponding to $\mu \to 0$ as $N \to \infty$). In SAEs, μ is a fixed constant, as it is critical that the factors maintain biological relevance even with large numbers of observations.
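To make the structural difference concrete, below is a minimal linear SAE trained by full-batch gradient descent. The factors are produced by an encoder that takes only x as input, so the identical map is used whether or not y is observed. All names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, L, mu, lr = 300, 10, 2, 0.1, 1e-3

# toy data: the highest-variance latent direction carries no information about y
S_true = rng.normal(size=(N, 3)) * np.array([4.0, 1.0, 1.0])
M = rng.normal(size=(3, p))
X = S_true @ M + 0.1 * rng.normal(size=(N, p))
y = S_true[:, 1] - S_true[:, 2]

A = 0.1 * rng.normal(size=(p, L))   # encoder: s = A^T x, the ONLY route to the factors
W = 0.1 * rng.normal(size=(L, p))   # decoder for reconstructing x
phi = np.zeros(L)                   # predictive head on the shared factors

def losses(A, W, phi):
    S = X @ A
    rec = mu * np.sum((S @ W - X) ** 2) / N
    pred = np.sum((S @ phi - y) ** 2) / N
    return rec, pred

start = losses(A, W, phi)
for _ in range(2000):
    S = X @ A                          # factors depend on X only
    rX, ry = S @ W - X, S @ phi - y
    dS = (2 * mu / N) * rX @ W.T + (2 / N) * np.outer(ry, phi)
    A -= lr * X.T @ dS                 # chain rule through the shared encoder
    W -= lr * (2 * mu / N) * S.T @ rX
    phi -= lr * (2 / N) * S.T @ ry
end = losses(A, W, phi)
```

Because prediction at test time reuses exactly the training-time encoder, the factor-dragging discrepancy of the joint model is zero by construction here.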
When are SAEs an appropriate predictive model?
The previous section makes it clear that, relative to joint models, SAEs improve predictive performance and reduce the consequences of misspecification of the true latent dimensionality. However, it is difficult to intuitively see what these differences are, as most formulations of Equations ( 1 ) and ( 3 ) do not have analytic solutions and are learned with stochastic methods ( Bottou et al., 2018 ). We develop novel analytic solutions when the likelihood is replaced with an $\ell_2$ loss. This yields a form similar to PCA, which can be considered the limiting case of probabilistic PCA as the variance of the conditional distribution vanishes ( Bishop, 2006 ).
We assume that both the predictors and outcome are demeaned. For convenience, we will define matrix forms of the data and factors: $X \in \mathbb{R}^{N \times p}$, $Y \in \mathbb{R}^{N \times q}$, and $S \in \mathbb{R}^{N \times L}$. We let the model parameters give $\hat{X} = SW^{\top}$ and $\hat{Y} = SD^{\top}$ for matrices $W \in \mathbb{R}^{p \times L}$, $D \in \mathbb{R}^{q \times L}$. For the associated SAE model, we assume a linear encoder, $S = XA$ with $A \in \mathbb{R}^{p \times L}$.
The solution for the concatenation of W and D in the joint factor model can be found as an eigendecomposition of the matrix
When Y is known, the factor estimates can be computed using the fixed-point equation detailed in the Appendix. When the outcome is unknown, the factor estimates are simply the projection of the data onto the latent space. This solution form is well known from PCA of the predictors and outcome jointly, except for the minor extension with μ .
The solution for the concatenation of W and D under the SAE formulation as defined above is also an eigendecomposition with a slightly different matrix
where P is a projection matrix onto X . The estimate for A can be computed via a fixed-point equation. We provide the details of the fixed-point equations and the derivations in the Appendix. The important aspect to note is that the SAE only models the variance of Y that is linearly predictable by X , due to the projection term. Thus, given that the latent space is computed from X alone, this formulation forces the model to find predictive factors using only the predictors.
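The role of the projection term can be checked numerically: projecting the outcome onto the column space of the predictors isolates exactly the linearly predictable variance. This is a hypothetical sketch; the names P, Y_fit, and resid are introduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, 1))

# Projection matrix onto the column space of X (X assumed full column rank)
P = X @ np.linalg.solve(X.T @ X, X.T)

Y_fit = P @ Y      # the part of Y a linear-encoder SAE can model
resid = Y - Y_fit  # the remainder: orthogonal to every column of X
```

The residual carries no linear information about X, which is why the SAE can safely ignore it when the latent space is computed from the predictors alone.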
We can use these analytic formulae to examine the distributional assumptions under which SAEs provide good predictive models. Previous work by Le et al. (2018) showed that a reconstruction loss added to a predictive model provided a bound on the generalization error in linear SAEs via sensitivity analysis ( Bousquet & Elisseeff, 2002 ). This corresponded to predictive gains with more common deep networks. However, bounds provided by sensitivity analysis can be loose ( Zhang et al., 2017 ) and ignore properties of the data generating process ( Bu et al., 2020 ). This led to the claim that reconstruction penalties empirically never harm performance. While this is often true in practice, an intuitive counterexample is a setting in which the latent structure is only weakly related to the outcome. The reconstruction loss will then force the model to focus on high-variance predictors unrelated to the outcome, harming predictive performance. On the other hand, having the predictive information correlate with high-variance latent factors yields substantial improvements, as estimates of high-variance components can converge with small numbers of samples ( Shen et al., 2016 ). Therefore, it is important to evaluate whether a data set is likely to benefit from an SAE-based approach before analysis.
We empirically explore the distributional assumptions of SAEs on random matrices using the analytic solutions previously derived. Specifically, we show that SAEs are beneficial when the predictive information lies on a low dimensional manifold, especially one that correlates strongly with high-variance components. To demonstrate the first claim, we generated 200 samples of predictors with zero mean and unit variance with a single latent component explaining 70% of the variance. The outcome was generated as . By allowing λ to vary, we can control how important the low-dimensional manifold is in predicting the outcome, as shown on the left of Figure 4 . SAEs are largely ineffective when the low-dimensional manifold is largely irrelevant for predictions (small values of λ ). However, the performance dramatically improves as the manifold becomes increasingly influential (large values of λ ). Models that place high value on the reconstruction loss are more effective at taking advantage of this low-dimensional structure (high values of μ ).
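A sketch of this first simulation follows. The exact generating equation is not reproduced in the text, so the form below (outcome equal to λ times the latent component plus independent noise, loadings chosen so the component explains roughly 70% of each predictor's variance) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 20

s = rng.normal(size=n)                    # single latent component
W = np.full(p, np.sqrt(0.7))              # ~70% of each predictor's variance
X = np.outer(s, W) + np.sqrt(0.3) * rng.normal(size=(n, p))

lam = 2.0                                 # manifold relevance (illustrative value)
y = lam * s + rng.normal(size=n)          # outcome tied to the latent component

# share of predictor variance explained by the component (per coordinate)
explained = np.var(np.outer(s, W), axis=0) / np.var(X, axis=0)
```

Sweeping lam from small to large reproduces the qualitative behaviour described above: the manifold becomes increasingly important for prediction.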
To demonstrate the second claim, we generated data with zero mean and unit variance such that . The outcome was generated as , using exclusively the second latent factor. By varying λ , we can explore the importance of having predictive information align with high variance components in the predictors, as is shown on the right of Figure 4 . The models that place high value on the reconstruction (larger values of μ ) largely ignore the predictive information when it is correlated with little variance in the predictors, while the model that emphasizes prediction overfits (smaller values of μ ). However, with slight increases in predictive variance, the model with small values of μ is quicker to incorporate information relevant to the outcome. Finally, when most variance in the predictors aligns with the predictive component, the models that place higher emphasis on reconstruction exploit the regularization most effectively and have the lowest generalization error.
The sparsity-in-dimensionality assumption of SAEs aligns better with network-based neuroscience than an alternative sparsity assumption in terms of the predictors ( Tibshirani, 1996 ). Many of our predictors are highly correlated, such as power in adjacent frequency bins in the same brain region. It is likely that both predictors will be similarly predictive of the outcome. LASSO tends to shrink the coefficient for the slightly less useful predictor to 0. An SAE would ideally find the network responsible for the variance observed in both predictors and relate this latent network to the outcome. Given the strong evidence in favour of network-based neuroscience and previous experiments demonstrating the relevance of our analysed features, it is reasonable to expect that SAEs will yield improved predictive performance relative to purely predictive or joint models when the relevant networks are moderately-to-highly expressed, but are potentially detrimental when the network has comparatively low expression.
How does predictive sparsity impact SAEs?
The SAE approach as previously defined only ensures a predictive latent subspace. However, our applications require that a single predictive network be learned, while the remaining networks model less relevant dynamics. In related settings, it is common to place a hard sparsity constraint on the parameters Φ, limiting the predictive influence to a single factor (or a small number of factors) ( Gallagher et al., 2017 ). This can lead to undesirable consequences when minimizing the loss in Equation ( 3 ), particularly when the decoder involves a difficult-to-optimize objective ( Ulrich et al., 2015 ). In particular, one locally optimal solution is to use the predictive factors to highly overfit the outcome, while the associated loadings vanish. This corresponds to a latent space factorized into a predictive and a generative model; the predictive factor has no biological significance as it explains none of the dynamics, while the unpredictive factors model the electrophysiology with one fewer latent dimension. This local optimum is not apparent with the linear models due to more stable training properties. However, with more complex models, such as the Gaussian process model of Gallagher et al. (2017) , this is a common problem.
We address this issue within the SAE approach by replacing the penalization terms on the loadings and the latent factors with a normalization constraint on the loadings for each factor. Let w_l be the loadings associated with factor l . Then, the objective corresponding to Equation ( 3 ) is replaced with an objective function
where the non-negativity constraints are applied elementwise. For convenience we chose the same norm for each factor. However, the choice is arbitrary; latent variable models inherently lack multiplicative identifiability. If the latent variables are multiplied by a constant k and the loadings by 1/ k , the reconstruction remains unchanged. This is not an issue for selecting a stimulation target, as we choose targets based on relative importance. Incorporating these constraints is straightforward with modern packages. Specifically, we define hidden unconstrained variables and then set the loadings as a transformation of these hidden variables that fulfils the constraint. In our case, we used the softplus function, f(z) = log(1 + exp(z)), which smoothly maps from the full real line to the positive reals. This unconstrained optimization with a transformation is straightforward to implement, particularly with automatic differentiation. By placing the softplus rectification as the final step, the remaining parameters can be learned without constraints. The objective in Equation ( 6 ) can then be optimized with respect to Φ, the factors, and the hidden variables, rather than optimizing the loadings directly. Other options, such as learning an unconstrained A and projecting into the feasible (non-negative) region, would require specialized implementations.
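A minimal sketch of this softplus parametrization, assuming the per-factor norm constant is 1; the names hidden and W are illustrative.

```python
import numpy as np

def softplus(z):
    # smooth map from the full real line to the positive reals
    return np.log1p(np.exp(z))

rng = np.random.default_rng(3)
p, L = 12, 4
hidden = rng.normal(size=(p, L))     # unconstrained hidden variables

W = softplus(hidden)                 # elementwise non-negative loadings
W = W / np.linalg.norm(W, axis=0)    # fix each factor's norm (constant chosen as 1)
```

Gradient-based optimizers can update the hidden variables freely; the non-negativity and norm constraints hold by construction after the transformation.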
In SAEs with deep decoders, implementing this type of identifiability constraint is not straightforward. However, our application requires that the SAE correspond to a joint model. Thus, the decoder corresponds to a shallow network, and there are natural normalization constraints on the associated loadings. In this model, a natural constraint on the NMF loadings is to require that the norm of each factor be equal to a constant. We enforce this constraint by first learning unconstrained variables and then dividing by the norm to obtain the loadings after the model is learned.
This normalization constraint also possesses a scientific justification. As a single factor is used to make predictions, we can choose targetable features based on the importance of the supervised factor loadings relative to the other factor loadings (e.g., the features uniquely important to the supervised network). This relevance is evaluated by dividing each entry in the supervised factor loadings by the sum of that entry in all factor loadings. The features with the highest ratios in the supervised network are potential candidates for stimulation. Additionally, as the network associated with our target behaviour is often relatively small in variance (hence the initial need for SAEs), normalizing the loadings for each factor to a constant provides a convenient method to ensure that the supervised network is not deemed irrelevant due to smaller loadings, as would be the case with penalization-based identifiability constraints.
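This relevance ratio can be sketched directly. The loadings below are random stand-ins, with factor 0 playing the role of the supervised network.

```python
import numpy as np

rng = np.random.default_rng(4)
p, L = 10, 5
W = rng.uniform(0.1, 1.0, size=(p, L))   # non-negative loadings; factor 0 supervised

# relative importance of each feature to the supervised network
relevance = W[:, 0] / W.sum(axis=1)

# candidate stimulation targets: features most unique to the supervised factor
candidates = np.argsort(relevance)[::-1][:3]
```

Because each factor's loadings share the same norm, a small-variance supervised network is not automatically swamped by the larger generative factors in this ratio.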
Validation of our approach with synthetic data
We now provide two synthetic examples that demonstrate that our SAE-based approach can recover a single predictive factor that is robust to misspecification. In the first example, we generate data from a known NMF model and show that our SAE-based approach can recover a predictive factor under misspecification in the latent dimensionality. In the second demonstration, we generate synthetic LFP dynamics using an alternative latent variable model ( Gallagher et al., 2017 ) and extract a predictive factor using our NMF model. The first example validates two aspects: first, that the estimated factor is predictive and second, that the estimated loadings for the SAE match the true loadings. The second example validates that our NMF approach can robustly extract predictive factors in a realistic simulation of experimental conditions. In both examples, we show that alternative approaches (cutting-the-feedback and supervised dictionary learning) fail under our experimental conditions.
Recovering a synthetic NMF component
We generated synthetic data such that the latent factors and the conditional distribution of the observations given the factors had truncated normal distributions. This aligns with the likelihood assumed under an L2 loss with non-negativity constraints. We chose a latent dimensionality of 10 and a 100-dimensional observation space to match the assumption that the number of networks is substantially lower than the observed dimensionality. The latent factors were independent with distinct variances. The outcome came from a Bernoulli distribution whose probability depended exclusively on the lowest-variance component. The data generation process can be written as
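A rough numpy sketch of this generation scheme, using half-normal draws (normals truncated at zero) as a stand-in for the truncated normal distributions. The dimensions follow the description above, but the specific scales, noise level, and link function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, L, p = 500, 10, 100

# independent latent factors with distinct variances, truncated at zero
scales = np.linspace(1.0, 0.1, L)
S = np.abs(rng.normal(size=(n, L))) * scales

W = rng.uniform(0.0, 1.0, size=(L, p))                  # non-negative loadings
X = np.clip(S @ W + 0.1 * rng.normal(size=(n, p)), 0.0, None)

# Bernoulli outcome driven only by the lowest-variance component
logits = 4.0 * (S[:, -1] - S[:, -1].mean())
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
```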
We fit an NMF model with five components to match the assumption that the number of estimated factors is smaller than the true number of networks. We evaluated three methods for inferring the model: cutting-the-feedback, joint modelling, and our SAE-based approach. For each model we maintained identical identifiability constraints on the loadings, corresponding to Equation ( 6 ). We measured the predictive performance of the learned factor using the area under the curve (AUC), which can be interpreted as the probability of the predictive network being higher in a positive sample as compared to a negative sample. This was evaluated on a held-out test set. We also quantified the accuracy of the learned loadings as the cosine similarity between the true and estimated loadings. The cosine similarity between two vectors u and v is defined as
This quantity measures how well the two vectors align, with 1 indicating perfect alignment and 0 indicating orthogonality. The cosine similarity allows us to assess how similar different brain networks are. Highly similar networks will involve similar regions with similar covariates, and so will largely align, with cosine similarities close to 1. On the other hand, networks that involve different covariates will have lower cosine similarities. The permutation non-identifiability is accounted for in the SAE and joint models by the fact that a single component is supervised, so similarity is measured between the supervised loadings and the true loadings. In the cut NMF, this similarity is computed between the loadings associated with the most predictive factor and the true loadings.
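A direct implementation of the cosine similarity between two loading vectors; the example vectors are made up.

```python
import numpy as np

def cosine_similarity(u, v):
    # 1 = perfectly aligned loadings, 0 = orthogonal (no shared structure)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

true_w = np.array([1.0, 2.0, 0.5, 0.0])
est_w = np.array([0.9, 2.1, 0.4, 0.1])
sim = cosine_similarity(true_w, est_w)
```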
Figure 5 and Table 1 show the results of the predictive AUC of the single estimated predictive factor, along with the cosine similarity between the estimated loadings and the true network characteristics. For comparison we have included the values for the true model and logistic regression. The latter provides an estimate of the predictive ability of a standard linear model. Cosine similarity ranges from 0 to 1, and for the specific generated data set the cosine similarity between two randomly learned loadings is . Values larger than this indicate that the model successfully incorporated information from the true model into the estimated network.
The factor estimated via cutting-the-feedback contains no predictive information and does not align with the true network. This is unsurprising, as none of the factors estimated using the reconstructive loss alone will contain information related to a low-variance factor as defined. More interestingly, the joint factor model provides no benefit over a cut model. This seems surprising, as incorporating information related to the outcome during estimation should improve predictive performance. However, the right panel of the figure shows why this is not the case. The AUC on the training set of the joint model is far higher than that of the SAE, and higher than the theoretical bound provided by the true model. However, predictive ability returns to random chance when the factors are estimated on the test set without knowledge of the outcome. This is the 'factor dragging' problem stated in Section 2 . The model overfits the predictive factors without modifying the corresponding loadings, making prediction dependent on knowledge of the outcome. Our SAE approach avoids this overfitting and accurately characterizes the network.
Extracting predictive features in synthetic LFPs
We now validate our SAE-NMF approach using synthetic LFPs to match our experimental conditions as closely as possible. The data were generated using a previously developed factor model specifically designed for analysing LFPs, referred to as Cross-Spectral Factor Analysis ( Gallagher et al., 2017 ). These models use Gaussian processes in a multiple kernel learning framework to represent the spectral features in a low-dimensional space of latent factors. This approach functions as a generative model given a prior on the latent factors.
We initialized a CSFA model with 30 latent components and 8 measured brain regions and generated 20,000 samples of synthetic LFPs using the sampling method described in Section A.6. The draws associated with a single latent factor were used to generate a synthetic binary outcome. The associated power spectrum of a subset of the regions is plotted in Figure 6 . This data set reflects many of our assumptions of brain dynamics; the latent variables have a substantial amount of sparsity, a small number of latent variables are responsible for the outcome of interest, and the spectral features associated with the latent variable are sufficient to characterize the brain network.
We then calculated the power and coherence features of the synthetic data using the same procedure described in Section 2 . This allows us to examine the performance of our approach under realistic circumstances. We compare our supervised approach to a logistic regression model using the observed features, a sequentially fit NMF model, and a joint model. Each NMF model was estimated with only 5 latent factors instead of the 30 factors used to generate the data. Therefore, our example demonstrates the effect of misspecification, not only in the latent dimensionality but also in the observational likelihood of the data under realistic assumptions.
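A sketch of how such power and coherence features might be computed from a 1-s window, assuming scipy.signal is available. The signals here are synthetic stand-ins, and the band edges mirror the 1–4 Hz band discussed later.

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 1000                                   # 1,000 samples per 1-s window
rng = np.random.default_rng(6)
lfp_a = rng.normal(size=fs)                 # synthetic LFP, region A
lfp_b = lfp_a + 0.5 * rng.normal(size=fs)   # correlated signal, region B

f, pxx = welch(lfp_a, fs=fs, nperseg=256)             # spectral power
_, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=256)  # cross-region coherence

# band-averaged features for the 1-4 Hz band
band = (f >= 1) & (f <= 4)
power_1_4 = float(pxx[band].mean())
coh_1_4 = float(cxy[band].mean())
```

Repeating this for every region and region pair produces the power and coherence feature vector that the NMF models take as input.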
The results are shown in Table 2 . The cut model fails to capture any information relevant to the outcome, which we attribute to the substantial misspecification. Correspondingly, the joint model also fails to yield accurate predictions, as it predicts nearly perfectly on the training data but does not incorporate that information into the associated loadings. However, our SAE approach is able to discover relevant features for prediction. In fact, it slightly improves on the pure logistic regression model, aligning with the theoretical results of Le et al. (2018) .
The results of these two synthetic examples suggest that our approach will be able to accurately recover the relevant dynamics associated with the outcome of interest and be robust to the types of misspecification that can be reasonably expected when analysing LFPs.
Estimating networks associated with stress and bipolar disorder
We have demonstrated why and how our SAE approach can yield improved predictive performance in synthetic LFP data. Furthermore, our approach can accurately estimate the associated network characteristics in a known model and handle misspecification of both the latent distribution and observational likelihoods. We now demonstrate the predictive abilities of SAEs on the TST data set. We use an NMF model with a single supervised factor to force all the predictive information into a single brain network. Our objective is to find a single network predictive of stress and a single network predictive of genotype. We fit a model with 10 factors and a set supervision strength of . The complete details of the implementation and the features are provided in the Appendix.
We compared the results from our approach to the two most relevant competitors, a joint factor model with a single supervised factor and ‘cutting-the-feedback’ with different numbers of latent factors. Since our objective is to find a single predictive network, we tested two measures of predictive ability when comparing to cutting-the-feedback. The first was the predictive ability using the entire latent subspace and the second was the predictive ability using the single most predictive factor on the training set. One method for improving the predictive performance with cutting-the-feedback approaches is to increase the number of latent factors, so we compare the performance of a 25-factor model in addition to the corresponding 10-factor model. We also compared the reconstructive losses to ensure biological relevance of the learned model.
Table 3 shows the predictive and reconstructive metrics using all stated models for predicting stress, along with 95% confidence intervals. It is apparent that our SAE-based approach is substantially better than cutting-the-feedback or fitting a joint factor model in predictive ability. Our approach with a single factor achieves an AUC of relative to for both the cut model and joint model. Even with more factors, a cut model approach still yields worse predictions. In fact, increasing the latent dimensionality can make the prediction using a single factor worse as the relevant information becomes divided between an increasing number of factors. The reconstruction losses were better for the cut and joint models at and , respectively, which is unsurprising as the encoder constrains how well the SAE can adapt the factors to each observation. However, the SAE still explains a substantial portion of the variance with a reconstruction loss of compared to loss from using the mean, indicating that the estimated factors are still biologically relevant.
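The AUC reported above can be computed directly from its probabilistic interpretation; a small self-contained sketch with toy scores and labels.

```python
import numpy as np

def auc(scores, labels):
    # probability that a positive sample scores higher than a negative one,
    # counting ties as one half
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return float(wins + 0.5 * ties)

scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0, 0])
```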
Choosing the optimal number of factors is a challenging problem. Several approaches exist to select the dimensionality in factor models, but they do not naturally apply in our context. For example, there are a number of Bayesian hierarchical methods that define priors for choosing the dimensionality ( Bhattacharya & Dunson, 2011 ) and penalization-based schemes that balance generative capability with complexity ( Minka, 2000 ). It is relatively common in machine learning to choose the number of components based on heuristics of explained variance ( Ferré, 1995 ) or through a cross-validation procedure. However, it is non-trivial to extend these techniques to the needs of our application: it is challenging to include the hierarchical priors in the SAE formulation, and a focus exclusively on prediction does not lend itself to finding a system that reconstructs the data well and can be explained as networks. Instead, for this application we chose 10 factors to match previously learned models ( Gallagher et al., 2017 ). This allowed the generative factors to account for the baseline variance while not rendering the predictive factor irrelevant in the reconstruction of the electrophysiology.
The sign of the predictive coefficient associated with the supervised factor in the SAE was consistently negative across all data splits. This indicates that the learned network was less active under stressful conditions than in a non-stressful environment. Figure 7 shows one method for visualizing the network. Each segment along the edge represents power in the labelled brain region. The 'spokes' in the circle represent coherence between the two specified regions. Stimulation targets are chosen as region/frequency combinations that are particularly influential in the supervised network. These influential combinations will have large loadings in spectral power and in coherences between that region and other brain regions. By stimulating such a 'central' region, we can influence the entire network and its associated behaviour ( Mague et al., 2022 ).
In this particular case, the network negatively associated with stress is characterized by power in the low frequencies of 1–4 Hz in the prefrontal cortex (IL_Cx and PrL_Cx), thalamus (Md_thal), amygdala (BLA), and ventral tegmental area (VTA), along with high levels of coherence between these regions. It is also characterized by coherence at these frequencies within the hippocampus (mDHipp and IDHipp). Since this network is weaker in the stressful situation, stimulation of the prefrontal cortex or VTA at low frequencies should shift the dynamics in a stressful situation to more closely align with those observed in non-stressful situations. Such stimulation would test whether the network is causal, and could potentially ameliorate the impact of stress on behaviour. We note that a prior stimulation procedure stimulated the thalamus (Md_thal) in a manner phase-locked to 3–7 Hz waves in the prefrontal cortex ( Carlson et al., 2017 ). That procedure aligns with the network discovered by our approach, and reduced time immobile in the TST, which is consistent with a reduction in stress.
Differentiating Clock-Δ19 vs wild type dynamics is a far more difficult task than characterizing stress. This is unsurprising: tail suspension and non-stressful situations are more distinctive than baseline dynamics altered by the modification of a single gene. Even so, our SAE approach outperforms prediction using cut or joint models, as shown in Table 4 . In this case, adding extra factors to the cut model yields predictive ability closer to (but still below) that of the SAE model. However, the previous task showed that a strategy relying on increasing latent dimensionality to improve predictions with a single factor will give inconsistent results. Our SAE approach provides a more reliable method for finding a single network correlated with the outcomes of interest in neuroscience.
The network found in each of the splits was consistently negatively correlated with the Clock-Δ19 genotype; the network found in one of the splits is shown in Figure 8 . That this network is negatively associated with Clock-Δ19 indicates that stimulating it in Clock-Δ19 mice would make their dynamics more closely align with those of the wild type population. This network is characterized by power in the prefrontal cortex at low frequencies, similar to the stress network. However, this network is weaker in the hippocampus (IDHipp and mDHipp). This indicates that a reasonable location for stimulation is the prefrontal cortex at low frequencies.
Our emphasis in this work has been on finding networks that can be used for stimulation, and therefore we have largely refrained from interpreting the scientific conclusions derived from the estimated networks in the TST data set. Nevertheless, it is easy to see the similarities between the estimated stress network and the estimated Clock-Δ19 network. In fact, the cosine similarity between the networks in Figures 7 and 8 is 0.99, whereas if we pick a random network in the task model and a random network in the genotype model, the average cosine similarity is 0.38. Based on this, our models suggest that the differences in genotype are similar to the observed differences induced by stressful and non-stressful conditions. It is encouraging to note that this observation matches previous scientific experimentation ( Murata et al., 2005 ). Additionally, this implies that the stress felt by the Clock-Δ19 animals is lower than that of wild type (WT) animals, which aligns with the finding that they are more active during the TST ( Carlson et al., 2017 ). We note that these findings show an appealing aspect of our approach: by using an interpretable and biologically relevant generative model in our SAE, we have been able to scientifically analyse the differences between the experimental groups.
As a final comparison, we can empirically verify our claim that the predictive ability of SAEs is robust to the latent dimensionality. Figure 9 shows the predictive ability of our SAE-based approach, an SDL approach, and a 'cutting-the-feedback' approach as a function of latent dimensionality, predicting both genotype and stress condition using the most predictive factor. These results were obtained by evaluating the general population rather than on a mouse-by-mouse basis, to match standard statistical evaluation techniques. On both tasks the SAE maintains a consistently high predictive ability across all latent dimensionalities. The other two methods not only fail to match this predictive ability but also fluctuate rather than showing a clear monotonic trend. This illustrates a substantial problem that has traditionally plagued network-based neuroscience: even simple parameter choices can cause dramatic shifts in predictive results.
We would like to thank Michael Hunter Klein and Neil Gallagher for providing helpful feedback and testing the code for the NMF and PCA models.
Funding
Research reported in this publication was supported by the National Institute of Biomedical Imaging and Bioengineering and the National Institute of Mental Health through the National Institutes of Health BRAIN Initiative under Award Number R01EB026937 to D.C. and K.D. and by a W.M. Keck Foundation grant to K.D. D.D. was funded by National Institutes of Health grants R01ES027498 and R01MH118927.
Data availability
The data analysed in this work are freely available at https://doi.org/10.7924/r4q52sj36 under a non-commercial licence.
Appendix
Derivation of an SAE via Lagrangian relaxation
We denote x as the measured dynamics (predictors) and y as the corresponding trait or outcome. The latent factors are denoted s , Θ are the parameters relating the factors to x , and Φ are the parameters relating the factors to y . We define the predictive feedforward network as the composition of an encoder and a predictive map, and the reconstructive autoencoder as the composition of the encoder and a decoder. Then the empirical risk minimization of the predictive loss yields an objective function
Adding a constraint on the reconstructive loss in a manner corresponding to structural risk minimization yields
This constraint on the reconstruction loss is difficult to enforce in practice. To circumvent this difficulty, we introduce a Lagrange multiplier μ , and the unconstrained version becomes
Since L does not depend on the parameters, we treat μ as a tuning parameter controlling the strength of the reconstruction loss. This yields the form of Equation ( 3 ).
Derivation of SAE and joint factor model analytic solutions with an L2 loss
We assume that both the predictors and the outcome are demeaned. For convenience, we define matrix forms of the data and factors: X , Y , and S . We let the model parameters be Θ = W and Φ = D for loading matrices W and D . For the associated SAE model, we assume a linear encoder, s = Ax . Both formulations replace the negative log likelihood with an L2 loss. Rather than penalizing the factors and loadings, we placed a normalization constraint on the loadings, analogous to PCA.
Deriving the joint model form
With simplifications, the objective corresponding to Equation ( 1 ) is
We can rewrite the above objective matrix form as
This is equivalent to the trace representation of
We can now take the partial derivatives with respect to W , S , and D using standard techniques with the aim to find the fixed points where each partial derivative vanishes. Denoting the previous loss as F , the first partial derivative with respect to W is
The derivative with respect to D is
And the derivative with respect to S is
These conditions will all be satisfied at the fixed-point solution to this problem. We will first solve for a single latent feature, so the matrices W , D , and S reduce to vectors. With that we can rewrite the above conditions as
We denote the corresponding quantities by α and γ . Since the factor estimate is a vector, both α and γ are scalars in this case. Thus, in the single-feature case, the fixed-point equations can be written as
When we substitute into the previous equations we end up with
Because α and γ are scalars we note that the solution to the equations above must be an eigenvector of
where the first part of the eigenvector corresponds to the solution for W and the second part corresponds to the solution for D . In practice, we have always found that the solution corresponds to the eigenvector with the largest eigenvalue. Further dimensions can be found iteratively by subtracting out the variance explained by the previous solution and then repeating the procedure. In practice, we have found these to be equivalent to the eigenvectors of the L largest eigenvalues of B , which is supported by the fact that when μ = 0 this yields the classic PCA solution.
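The deflation procedure just described can be sketched generically. The matrix B below is a random symmetric stand-in for the derived matrix, since the exact block structure appears in the display equations.

```python
import numpy as np

def leading_eig(B):
    # eigenvector with the largest eigenvalue of a symmetric matrix
    vals, vecs = np.linalg.eigh(B)
    return vals[-1], vecs[:, -1]

rng = np.random.default_rng(7)
M = rng.normal(size=(6, 6))
B = M @ M.T                            # symmetric PSD stand-in for the derived matrix

components = []
for _ in range(3):                     # find three latent dimensions
    val, vec = leading_eig(B)
    components.append(vec)
    B = B - val * np.outer(vec, vec)   # subtract out the variance explained

C = np.array(components)
```

Each deflation step zeroes out the eigenvalue just found, so the next leading eigenvector is orthogonal to all previous solutions.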
Deriving the SAE form
We similarly rewrite the objective of an SAE in matrix form as
This is equivalent to the trace representation of
We can now take the partial derivatives with respect to W , A , and D using standard techniques with the aim to find the fixed points where each partial derivative vanishes. Denoting the previous loss as F , the first partial derivative with respect to W is
The derivative with respect to D is
And the derivative with respect to A is
Once again, we first solve for a single latent feature, so the matrices W , D , and A reduce to vectors. With that we can rewrite the above conditions as
We denote the corresponding quantities by α and γ . Since the factor estimate is a vector, both α and γ are scalars in this case. Thus, in the single-feature case, the fixed-point equations can be written as
When we substitute into the previous equations we end up with
Because α and γ are scalars we note that the solution to the equations above must be an eigenvector of
where the first part of the eigenvector corresponds to the solution of and the second part corresponds to the solution of . In practice, we have always found that the solution corresponds to the eigenvector associated with the largest eigenvalue. Further dimensions can be found iteratively by subtracting out the variance explained by the previous solution and then repeating the procedure.
Implementation of NMF models
The NMF models used in this paper were implemented in TensorFlow 2.0, with the code available in a public GitHub repository. The loadings and factors were initialized using a standard NMF implemented in scikit-learn. The loadings were rotated so that the most predictive factor was the designated supervised factor. In the SAE version, the encoder was initialized with the coefficients of a linear model that predicted the initialized factors from the covariates, implemented using the elastic net in scikit-learn. The joint factor model was trained in batch mode, while the SAE was learned with stochastic gradient descent. The learning rates were set to values that had worked well previously.
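A minimal sketch of this initialization is given below. The data, matrix names, and hyperparameters (`alpha`, `l1_ratio`, `n_components`) are illustrative placeholders rather than the authors' settings; the actual implementation lives in their repository.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import ElasticNet, LinearRegression

rng = np.random.default_rng(0)
X = rng.random((100, 40))   # nonnegative data matrix (placeholder)
y = X[:, :5].sum(axis=1)    # outcome driven by part of X (placeholder)

# Step 1: initialize factors/loadings with a standard scikit-learn NMF.
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
S = nmf.fit_transform(X)    # factors (scores), shape (100, 4)
W = nmf.components_         # loadings, shape (4, 40)

# Step 2: rotate (reorder) so the most predictive factor comes first
# and serves as the designated supervised factor.
r2 = [LinearRegression().fit(S[:, [k]], y).score(S[:, [k]], y)
      for k in range(S.shape[1])]
order = np.argsort(r2)[::-1]
S, W = S[:, order], W[order, :]

# Step 3 (SAE variant): initialize the encoder as an elastic-net model
# predicting the initialized factors from the covariates.
enc = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=5000).fit(X, S)
A = enc.coef_               # encoder weights, shape (4, 40)
```

From here the factors, loadings, and encoder would be handed to the TensorFlow optimizer for joint refinement.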
Synthetic LFP generation
We generated data from a CSFA-CSM model following Gallagher et al. (2017), using the code available in the GitHub repository at https://github.com/neil-gallagher/CSFA .
We initialized a CSFA-CSM model with 30 latent variables, 8 channels, and 3 spectral Gaussian mixtures using the default method. Using this model, 20,000 LFP windows were generated with the default sampling method, each window containing 1,000 samples over a 1-s interval. The synthetic outcome was generated from the sampled latent factors of a single latent component. These synthetic LFPs were then processed using the standard pipeline described in Section 5 . The CSFA-CSM model, synthetic LFPs, and outcome are included with the paper.

J R Stat Soc Ser C Appl Stat. 2023 May 22; 72(4):912–936. License: CC BY.
PMC10477232 (PMID: 37666925)

Introduction
In men, physical and psychological stresses caused by different factors, including resistance and endurance exercises, increase the secretion of cortisol and testosterone, which in turn affect the hypothalamic–pituitary axis 1 . These hormones have their own circadian rhythms: the circulating cortisol concentration peaks 30 min after wakeup and then immediately decreases toward the evening, and the circulating testosterone concentration peaks at wakeup and gradually decreases toward the evening 1 , 2 . Overtraining syndrome refers to the hormonal response to increased physical and psychological stresses on athletes caused by excessive overloading during training, which results in reduced performance 3 . In this condition, cortisol shows a reduced response to exercise and changes in its circadian rhythm, such as a low resting concentration and loss of the post-wakeup peak 3 , 4 . Testosterone also shows a decreased resting concentration 4 . Monitoring hormone secretions and their circadian rhythms can help with preventing and assessing overtraining syndrome. The testosterone-to-cortisol (T/C) ratio has also been shown to decrease with exercise-induced stress 5 . Because testosterone and cortisol each follow their own circadian rhythm, the individual rhythms of testosterone, cortisol, and the T/C ratio should be considered when monitoring exercise-induced stress among athletes.
Cortisol and testosterone concentrations are commonly assessed by using immunological methods on serum and saliva specimens. Saliva sampling is a simple, stress-free, and noninvasive method that, unlike blood sampling, does not require the help of a medical professional 6 . Moreover, in serum, cortisol and testosterone are largely bound to corticosteroid-binding globulin and sex hormone-binding globulin, respectively, whereas neither steroid hormone is protein-bound in saliva. Therefore, salivary cortisol and testosterone concentrations have stronger positive correlations with serum free cortisol and testosterone concentrations than with serum total cortisol and testosterone concentrations, respectively 7 , 8 . However, using the cortisol and testosterone concentrations to monitor stress caused by different exercise intensities must take their individual circadian rhythms into account. This requires sequential sampling during days of exercise and the efficient measurement of a large number of samples. Our previous studies showed that automated electrochemiluminescence immunoassay (ECLIA) can accurately assess salivary cortisol concentrations, and that sequential saliva sampling with automated measurement of salivary cortisol can be used to detect the circadian rhythm and compare stresses induced by endurance exercises of different intensities among female long-distance runners at the same time on different days 9 , 10 . Escribano et al. 11 reported that an automated chemiluminescent immunoassay accurately evaluated the salivary testosterone concentrations of growing pigs. However, there have been no studies on similar saliva sampling and automated assessment of the testosterone concentration and T/C ratio among human athletes for assessing exercise stress.
If the combined automated assessment of salivary testosterone and cortisol concentrations considering their individual circadian rhythms can adequately identify changes in stress caused by exercises at different intensities, this approach may help with establishing exercise programs that prevent overtraining syndrome. In the current study, our aim was to validate whether the automated ECLIA-based assessment of testosterone and cortisol concentrations and the T/C ratio can assess the stress response of male long-distance runners to exercises of varying intensities accurately and effectively. | Methods
Ethical approval and consent to participate
Written informed consent was obtained from all participants. This study was approved by the Gunma University ethics review board for medical research involving human subjects and registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR), which meets the criteria of the International Committee of Medical Journal Editors (UMIN registration numbers UMIN000051749 and UMIN000051750). All measurements were carried out on trained athletes and in accordance with the Declaration of Helsinki.
Participants
We recruited 20 elite Japanese male long-distance runners as participants. Their lifestyle habits (e.g., wakeup time, mealtime, bedtime, and meal contents) were standardized at the same dormitory. The study design is shown in Fig. 1 . For the correlation analyses of the testosterone and cortisol concentrations, saliva and serum samples were collected at 7:00 am before the morning exercise and breakfast. After 28 days, saliva samples were collected sequentially from the participants in the morning and evening on two consecutive days, during which they performed exercises of different types and intensities. This took place during a training period sufficiently removed from races and followed the same procedure as described in previous studies 9 , 10 . The participants were then divided into two groups according to whether they performed interval training in the evening on day 1. The non-interval training (non-IT) group was given the following exercise program. On day 1, they performed walking and light jogging for 60 min in the morning and light jogging for 60 min in the evening. On day 2, they performed fixed-distance running of 6000–12,000 m according to their individual conditions in the morning and walking and light jogging for 60 min in the evening. The interval training (IT) group was given the following exercise program. On day 1, they performed fixed-distance running of 12,000 m in the morning and interval training with seven sets of fast 1000-m running and light jogging in the evening. On day 2, they performed fixed-distance running of 10,000 m in the morning and fixed-distance running of 15,000 m in the evening. The participants ran at their own pace during the light jogging, fixed-distance running, and interval training. Because it imposes faster running within a short time period, interval training was treated as the high-intensity exercise program for grouping.
The participants were provided sufficient drinking water during training sessions to prevent dehydration 9 , 10 . On the two training days, saliva samples were collected at eight time points: upon waking (5:00 am), before morning exercise (5:30 am), after morning exercise (7:00 am), before breakfast (7:30 am), before lunch (12:00 pm), before evening exercise (4:00 pm), after evening exercise (6:30 pm), and before dinner (7:00 pm). This followed the same schedule as described in previous studies 9 , 10 .
Physical examination and assessment of exercise intensity
The body weight and fat of the participants were assessed by using a bioimpedance instrument (InBody 770; InBody Japan, Tokyo, Japan). The body mass index was calculated as the weight in kilograms divided by the squared height in meters (kg/m 2 ). Participants were interviewed on their use of medications and supplements. The participants wore the Fitbit Ionic (Fitbit Inc., Tokyo, Japan), which is a reliable tool for measuring distances and pulse rates 12 , on their wrists from the day before saliva collection. The resting pulse rate at wakeup and the maximum pulse rate during each exercise session were assessed by using the wearable devices. The distance and duration of running during each exercise session were recorded, and the running velocity was calculated as the distance divided by the duration (m/min), following the procedure described in previous studies 9 , 10 . The Borg rating of perceived exertion (RPE) scale 13 was used to assess each athlete's subjective exertion after exercise. Each runner's RPE was scored on a scale of 6 to 20 and analyzed as the Borg scale score 9 , 10 .
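The two derived quantities in this paragraph are simple ratios; a minimal sketch with invented example values (not data from this study):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by squared height in meters."""
    return weight_kg / height_m ** 2

def running_velocity(distance_m, duration_min):
    """Running velocity in m/min: distance divided by duration."""
    return distance_m / duration_min

# Illustrative values only:
print(round(bmi(60.0, 1.70), 1))      # 20.8 kg/m^2
print(running_velocity(12000, 40.0))  # 300.0 m/min
```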
Sample collection
Saliva samples (targeted at 500 μL) were collected by unstimulated passive drooling using a polypropylene tube (SaliCap, IBL International, Hamburg, Germany). The participants were not allowed to brush their teeth, chew gum, or consume any food or drink except water within the 15 min before sample collection. All saliva samples were immediately stored at − 80 °C until analysis. Blood samples (2 mL) were obtained by puncturing an antecubital vein using a 23-G needle while the participants were sitting. The serum samples were separated via centrifugation (1500 × g ) at 4 °C for 10 min, and they were immediately stored at − 80 °C until analysis 9 , 10 .
Assessment of salivary and serum testosterone and cortisol concentrations
ECLIA was performed to assess the salivary and serum testosterone and cortisol concentrations by using the Elecsys Testosterone II and Elecsys Cortisol II assays on the Cobas 8000 system (Roche Diagnostics K.K., Tokyo, Japan). The intra- and inter-assay coefficients of variation (CVs) were respectively 4.2% and 5.6% for the salivary testosterone concentration, 1.7% and 1.6% for the serum testosterone concentration, 4.1% and 4.6% for the salivary cortisol concentration, and 1.3% and 3.4% for the serum cortisol concentration. The serum free testosterone concentration was evaluated by using a radioimmunoassay kit (IMMUNOTECH s.r.o., Prague, Czech Republic) at SRL Inc. (Tokyo, Japan). The intra- and inter-assay CVs of the serum free testosterone concentration were 5.9% and 6.5%, respectively. The salivary T/C ratio was calculated as the salivary testosterone concentration divided by the salivary cortisol concentration. The serum total testosterone-to-total cortisol ratio was calculated as the serum total testosterone concentration divided by the serum total cortisol concentration, and the serum free testosterone-to-total cortisol ratio was calculated as the serum free testosterone concentration divided by the serum total cortisol concentration. The rates of change in the salivary testosterone and cortisol concentrations and the salivary T/C ratio caused by exercise were calculated as each post-exercise value divided by the corresponding pre-exercise value (%).
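The ratio and rate-of-change definitions above can be sketched directly; the values and units below are invented for illustration and are not measurements from this study:

```python
def tc_ratio(testosterone, cortisol):
    """Salivary T/C ratio: testosterone concentration divided by cortisol concentration."""
    return testosterone / cortisol

def rate_of_change(pre, post):
    """Exercise-induced rate of change (%): post-exercise value over pre-exercise value."""
    return post / pre * 100.0

# Illustrative pre-/post-exercise values (hypothetical units):
pre_t, post_t = 120.0, 150.0   # salivary testosterone
pre_c, post_c = 8.0, 12.0      # salivary cortisol

tc_pre = tc_ratio(pre_t, pre_c)               # 15.0
tc_post = tc_ratio(post_t, post_c)            # 12.5
roc_cortisol = rate_of_change(pre_c, post_c)  # 150.0 %
roc_tc = rate_of_change(tc_pre, tc_post)      # about 83.3 % -> ratio fell after exercise
```

In this invented example, cortisol rises proportionally more than testosterone, so the T/C ratio falls, which is the pattern the study associates with higher-intensity exercise.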
Statistical analysis
The variables did not have a normal distribution. Thus, the measurement results are expressed as median values with the corresponding 25th–75th percentile ranges rather than as means with standard deviations. Spearman's correlation analysis was performed to assess the correlations between the salivary and serum testosterone and cortisol concentrations and the T/C ratios. The Mann–Whitney U test was used to identify statistically significant differences in each variable between the two groups. The Wilcoxon signed-rank test was used to identify statistically significant differences in each variable between two time points. A p value of < 0.05 was considered statistically significant. All statistical analyses were performed using IBM SPSS Statistics version 26.0 (IBM Corp., Armonk, NY, USA).
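The same battery of nonparametric tests can be sketched with SciPy rather than SPSS, using synthetic stand-in data (not the study's measurements):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic paired salivary/serum concentrations (illustrative only).
saliva = rng.gamma(2.0, 2.0, 20)
serum = saliva * 3 + rng.normal(0, 1, 20)

# Spearman correlation between salivary and serum concentrations.
rho, p_rho = stats.spearmanr(saliva, serum)

# Mann-Whitney U test between two independent groups (e.g., non-IT vs IT).
group_a = rng.gamma(2.0, 2.0, 8)
group_b = rng.gamma(2.0, 3.0, 7)
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

# Wilcoxon signed-rank test between two time points (paired samples).
pre = rng.gamma(2.0, 2.0, 7)
post = pre * rng.uniform(1.1, 1.6, 7)
w_stat, p_w = stats.wilcoxon(pre, post)

# Median and 25th-75th percentile range, the summary reported in the paper.
med = np.median(saliva)
q25, q75 = np.percentile(saliva, [25, 75])
```

The group sizes above mirror the study (8 vs 7 runners with measurable samples), which is part of why rank-based tests are the natural choice here.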
Correlations between the salivary and serum testosterone and cortisol concentrations and the T/C ratio
Table 1 presents the characteristics of the participants. Table 2 shows the Spearman’s correlation analyses of the salivary and serum testosterone and cortisol concentrations obtained from ECLIA. The salivary testosterone concentration was positively correlated with the serum total testosterone concentration (ρ = 0.702, p < 0.001) and serum free testosterone concentration (ρ = 0.789, p < 0.001). The salivary cortisol concentration was positively correlated with the serum total cortisol concentration (ρ = 0.586, p = 0.007). The salivary T/C ratio was significantly positively correlated with the serum total testosterone-to-total cortisol ratio (ρ = 0.618, p = 0.004) and serum free testosterone-to-total cortisol ratio (ρ = 0.663, p = 0.001). Figure 2 shows the significant positive correlations between the salivary T/C ratio and the serum total testosterone-to-total cortisol ratio (Fig. 2 A), and between the salivary T/C ratio and the serum free testosterone-to-total cortisol ratio (Fig. 2 B).
Running intensity of each exercise program
Figure 1 shows the division of the participants into two groups. Eight runners who did not perform interval training during the evening exercise on day 1 were included in the non-IT group, and 12 runners who performed interval training were included in the IT group. Five runners in the IT group were excluded because measurable saliva samples could not be obtained owing to low volume (n = 2) or high viscosity (n = 3), which left seven runners with measurable samples. Table 3 shows the characteristics of each group. There were no significant differences in height, weight, body mass index, body fat, or resting pulse rate on days 1 and 2 between the groups. Table 4 presents the running intensity during each exercise program in each group. In the evening exercise on day 1, the IT group, which performed interval training, had a significantly higher running velocity ( p = 0.014), Borg scale score ( p < 0.001), and maximum pulse rate ( p = 0.001) than the non-IT group. In addition, the IT group had a significantly higher running velocity ( p e = 0.043), Borg scale score ( p e = 0.018), and maximum pulse rate ( p e = 0.018) during the evening exercise on day 1 than on day 2. The IT group had a significantly higher Borg scale score ( p day1 = 0.017) during the evening exercise than during the morning exercise on day 1. The IT group had a significantly higher running velocity ( p = 0.043) during the morning exercise on day 1 than the non-IT group. The IT group had a significantly higher running velocity ( p m = 0.046) during the morning exercise on day 1 than on day 2. The IT group had a significantly longer running distance ( p = 0.009) than the non-IT group during the evening exercise on day 2, but the running velocity, Borg scale score, and maximum pulse rate did not differ significantly. The IT group had a significantly longer running distance ( p day2 = 0.028) during the evening exercise than during the morning exercise on day 2.
The non-IT group had a significantly longer running distance ( p day1 = 0.043) during the evening exercise than during the morning exercise on day 1. The non-IT group had a significantly higher running velocity ( p m = 0.043) and Borg scale score ( p m = 0.041) during the morning exercise on day 2 than on day 1.
Changes in the salivary testosterone and cortisol concentrations and the T/C ratio in response to exercise intensity
Figure 3 depicts the changes in the salivary testosterone and cortisol concentrations and the T/C ratio in response to exercise and considering the circadian rhythms of the two groups over two consecutive days. There was no significant difference between the two groups in the testosterone concentration at wakeup on day 1 ( p = 0.336) and day 2 ( p = 0.397). There was also no significant difference between the two groups in the cortisol concentration at wakeup on day 1 ( p = 0.232) and day 2 ( p = 0.867). There was no significant difference between the two groups in the T/C ratio at wakeup on day 1 ( p = 0.463) and day 2 ( p = 1.000). The salivary testosterone concentration gradually decreased from morning to evening, and the salivary cortisol concentration peaked after wakeup (5:30 am) and immediately decreased on both days for the two groups. Furthermore, the T/C ratio reached its minimum after wakeup (5:30 am) and then gradually increased. Changes in the salivary testosterone and cortisol concentrations and T/C ratio during their individual circadian rhythms were detected on both days for the two groups. The salivary testosterone and cortisol concentrations and the T/C ratio did not show significant changes after the morning exercise. However, the salivary testosterone concentration significantly increased after the evening exercise on both days for both groups. The non-IT group did not show significant changes in the salivary cortisol concentration and T/C ratio after the evening exercise on both days. However, the IT group showed a significant increase in the salivary cortisol concentration after the evening exercise on day 1 and a significant decrease on day 2. The IT group also showed a significant decrease in the T/C ratio after the evening exercise on day 1 and a significant increase on day 2.
Rates of change in the salivary testosterone and cortisol concentrations and the T/C ratio
Figure 4 shows the rates of change in the salivary testosterone and cortisol concentrations and the T/C ratio. According to the Wilcoxon signed-rank test, the IT group showed a significantly higher rate of change in the salivary cortisol concentration ( p = 0.018) and a significantly lower rate of change in the T/C ratio ( p = 0.018) during the evening exercise on day 1 than on day 2. On day 1, the IT group showed a significantly lower rate of change in the T/C ratio ( p = 0.028) in the evening than in the morning. On day 2, the IT group showed significantly higher rates of change in the salivary testosterone concentration ( p = 0.018) and T/C ratio ( p = 0.028) and a significantly lower rate of change in the salivary cortisol concentration ( p = 0.028) during the evening exercise than during the morning exercise. According to the Mann–Whitney U test, the IT group had a significantly higher rate of change in the salivary cortisol concentration ( p = 0.014) and a significantly lower rate of change in the T/C ratio ( p = 0.006) than the non-IT group during the evening exercise on day 1. | Discussion
In this study, we investigated whether the automated measurement of the salivary testosterone and cortisol concentrations and the salivary T/C ratio can be used to assess the stress induced by exercise at different intensities among male long-distance runners accurately and effectively while considering circadian rhythms. The salivary testosterone and cortisol concentrations showed positive correlations with their respective serum concentrations. The combination of sequential saliva collection and automated ECLIA measurement was able to detect the circadian rhythms of the testosterone and cortisol concentrations and the T/C ratio, as well as acute changes caused by exercise. However, measurable saliva samples could not be obtained via passive drooling from five participants due to low volume and high viscosity. The IT group showed a significantly higher rate of change in the salivary cortisol concentration and a significantly lower rate of change in the salivary T/C ratio during the interval training in the evening on day 1, in line with its significantly higher values on multiple intensity indices. Such changes were not observed in the salivary testosterone concentration.
The cotton swab is a convenient method for quickly collecting a sufficient volume of saliva without residue and mucus for assessing stress marker levels 14 . The cortisol concentration in cotton swab samples is a better predictor of the serum total cortisol and free cortisol concentrations than that in passive drooling samples among healthy volunteers 7 . In contrast, cotton swab samples yield falsely high testosterone concentrations when assessed by immunoassays 15 , 16 . Previous studies have shown that cross-reactivity between plant hormones and antibodies influences the results of the testosterone immunoassay if a cotton swab is used 15 – 18 . This is why we used passive drooling in the current study. The salivary cortisol concentration measured by automated second-generation ECLIA has a significantly positive correlation with the measurement by liquid chromatography–tandem mass spectrometry (LC–MS/MS) 19 . We previously assessed the exercise-induced stress among female long-distance runners by combining automated measurement of the salivary cortisol concentration with sequential sampling using a cotton swab 9 , 10 . In that study, the salivary cortisol concentration from ECLIA showed a significantly positive correlation with the concentration from an enzyme-linked immunosorbent assay 9 . Moreover, all salivary samples collected by a cotton swab could be measured, with none needing to be excluded due to low volume or high viscosity 9 , 10 .
In the present study, we first evaluated the salivary testosterone concentrations measured by ECLIA by comparing them to the serum measurements. The salivary testosterone concentration showed a significantly stronger positive correlation with the serum free testosterone concentration than with the serum total testosterone concentration. The salivary testosterone measurement can thus be combined with sequential sampling using passive drooling to assess the circadian rhythm among runners. However, the saliva samples of several participants collected via passive drooling did not have sufficient volume or were extremely viscous, so they could not be used for ECLIA. Physical and mental stresses reduce the flow rate and increase the viscosity of saliva 20 , 21 . Assays including ECLIA require a sufficient sample volume (minimum of 100–200 μL) because of the dead volume required for trouble-free automated sample processing. Automated ECLIA for measuring the salivary cortisol and testosterone concentrations is advantageous because it can measure a large number of samples easily and rapidly. However, its usefulness may be reduced when used in combination with passive drooling owing to its requirement for large sample volumes without residue and mucus.
Previous studies have shown that the serum cortisol concentration acutely increases due to moderate- to high-intensity endurance exercise (i.e., > 60% of the maximal oxygen consumption [VO 2 max ]) 22 , 23 . Sato et al. 24 showed that the serum cortisol and free testosterone concentrations in healthy young men were elevated by two 15-min sessions of submaximal exercise using an electromechanically braked ergometer at ≥ 40% of their peak oxygen uptake (VO 2 peak ) among non-athletes. However, the serum cortisol and free testosterone concentrations were only increased among male endurance runners by exercise at 90% VO 2 peak . Tremblay et al. 25 showed that a running duration of at least 80 min increases testosterone and cortisol concentrations during low-intensity endurance exercise. Resistance exercise acutely increases testosterone secretion, which is an anabolic hormone that is essential for muscular adaptation and muscle growth 1 . In a previous meta-analysis, Hayes et al. 5 revealed that although acute aerobic and resistance exercises consistently increase the salivary testosterone concentration, the acute response of salivary testosterone to power-based exercise has not been fully elucidated. Anderson et al. 26 showed that the serum free testosterone concentration decreases immediately after exhaustive endurance exercise and gradually increases after 24 h or during the recovery process among male endurance athletes.
In the current study, the interval training during the evening on day 1 for the IT group had the highest exercise intensity based on indicators including the running velocity, Borg scale score, and maximum pulse rate. The interval training significantly increased the salivary cortisol concentration and decreased the salivary T/C ratio. The rate of change in the salivary testosterone concentration showed no significant differences for a given exercise between the two groups or between different exercises within the same group. The salivary testosterone concentration increased after the evening exercise on both days for both groups, despite the differences in exercise intensity. The circadian rhythm of serum testosterone is characterized by high concentrations in the morning followed by a gradual decline in the evening, accompanied by a mild rise from 4:00 pm to 7:00 pm in young men 27 . In our study, the increase in salivary testosterone concentrations after evening exercise at 6:30 pm in all runners, independent of exercise intensity, may have reflected this circadian rhythm. Moreover, the change in the salivary testosterone concentration may have been affected by a longer exercise duration. Doan et al. 28 observed similar results in the circadian rhythm during a 36-hole golf competition, in which the salivary testosterone concentration only increased during holes 25–30 in the evening on the competition day compared with the baseline day. In contrast, the salivary cortisol concentration increased and the salivary T/C ratio decreased at almost every hole on the competition day. They concluded that a low T/C ratio was correlated with good golf performance. The T/C ratio is generally an indicator of the anabolic/catabolic balance during skeletal muscle destruction and recovery 29 . In the current study, we observed that a lower salivary T/C ratio might reflect the acute stress response to exercises of different intensities.
However, we did not assess differences in the performance of participants or changes in hormone levels during recovery. Thus, further study of the salivary testosterone concentration and T/C ratio is needed to investigate their associations with the performance and recovery processes of endurance- and resistance-trained athletes.
Detecting the responses of testosterone and cortisol to exercise in the morning is challenging owing to their respective circadian rhythms. However, such measurements are easily obtained in the evening 30 . We 9 previously showed that differences in the rate of change in the salivary cortisol concentration caused by exercise at different intensities could be compared at the same time on different days, even in the early morning. This method allowed us to assess the differences in acclimatization and exercise stress between two altitudes 10 . In the current study, the rates of change in the salivary testosterone and cortisol concentrations and the T/C ratio caused by morning exercise showed no differences between the groups on either day. This may be because the exercise intensity was not sufficient to elicit a hormonal response compared with the exercises used in our previous studies 9 , 10 . However, we did observe significant differences in the running velocity and maximum pulse rate in the morning. Further studies involving other types of exercises and higher intensities must be conducted to compare the changes in the hormonal response to exercise in the early morning. In contrast, the rates of change in the cortisol concentration and T/C ratio caused by evening exercise showed significant differences between the two days depending on the exercise intensity. This result suggests that the automated salivary cortisol assessment used to compare exercise-induced stress responses in our previous studies is also useful for determining the salivary T/C ratio. Based on its circadian rhythm, the T/C ratio was lowest after wakeup and then gradually increased. We believe this reflects the circadian rhythm of cortisol rather than that of testosterone. The rate of change in the salivary T/C ratio indicated differences in the stress response between the exercise programs.
The salivary T/C ratio decreased on day 1 and increased on day 2 after the evening exercise for the IT group. We concluded that it was more influenced by the cortisol response than by the testosterone response. In their meta-analysis, Hayes et al. 5 obtained similar results showing that the response of the salivary T/C ratio to exercise was due to changes in the salivary cortisol concentration. Because passive drooling may not be sufficient for obtaining the samples required for assessing the T/C ratio, further study should be conducted to determine whether cortisol alone can be used to evaluate the exercise-induced stress response. This would be very helpful because cortisol can be measured by using saliva conventionally collected with cotton swabs.
The current study had several limitations. First, the sample size of each group was relatively small. Furthermore, the number of participants in the IT group was reduced because some saliva samples obtained via passive drooling could not be used for the automated measurement. In addition, we were unable to compare the circadian rhythms between sedentary participants without an exercise effect and the exercised runners. This limitation arose from the focus on standardizing living conditions, which prevented the recruitment of large numbers of well-trained runners or non-runners. Moreover, it was not possible to provide a sedentary period because maintaining the condition of the runners was the top priority. Second, the exercise programs were not evaluated by using accurate intensity indicators such as VO 2 max . The runners' exercise conditions could not be tightly controlled. Nevertheless, we formed two groups with or without high-intensity interval training, for which multiple intensity indices were significantly higher, and we consider it important that the salivary cortisol concentration and T/C ratio could detect the difference in stress response between higher-intensity interval training and lower-intensity running on different days in the IT group. Further studies are needed that include a sedentary group showing the circadian rhythms without an exercise effect and that use a standardized exercise program with more participants and accurate indicators for both high- and low-intensity exercise. Third, the time between the post-evening exercise and pre-dinner sampling points was not sufficient. Some participants had high salivary hormone concentrations at the pre-dinner sampling point, when concentrations should be at their lowest, particularly for cortisol. More studies should be performed to adjust the collection times according to the training program, such as including a point before bedtime to assess the basal concentrations at night.
In conclusion, automated ECLIA assessment of salivary testosterone and cortisol concentrations is as accurate as an assessment using serum samples. The cortisol concentration and T/C ratio assessed via sequential saliva collection and automated evaluation can adequately reflect differences in the intensity of endurance exercise performed at the same time of day on different days. Such an approach may be useful for detecting different stress responses among athletes while accounting for the circadian rhythm.

Abstract

In this study, our aim was to validate whether the automated measurement of salivary testosterone and cortisol concentrations and the testosterone-to-cortisol (T/C) ratio, considering their individual circadian rhythms, can be used to assess the stress response of male athletes to different exercise intensities accurately and effectively. We measured the salivary and serum testosterone and cortisol concentrations in samples collected from 20 male long-distance runners via passive drooling in the morning and evening on two consecutive days involving different exercise intensities. An electrochemiluminescence immunoassay was performed to evaluate the salivary testosterone and cortisol concentrations. The results showed a positive correlation between the salivary testosterone and cortisol concentrations and their respective serum concentrations. The participants were divided into two groups: with and without interval training. The interval training group showed a significantly higher rate of change in the salivary cortisol concentration and a significantly lower rate of change in the T/C ratio after the evening interval training on day 1 than after lower-intensity running on day 2. Our results indicated that the salivary cortisol concentration and the T/C ratio could distinguish between exercises at different intensities, which may be beneficial for detecting differences in stress responses among athletes.
Acknowledgements
We are grateful to Hidekazu Shimazu, Koji Hase, Mai Murata, Larasati Martha, and Mayumi Nishiyama for providing technical assistance and helpful discussion. This work was supported by the Ministry of Education, Culture, Sports, Science, and Technology of Japan [grant numbers 18K07406 and 23K10581 (K. Tsunekawa)].
Author contributions
K.T. participated in the collection and analysis of data and manuscript writing, reviewing, and editing. Y.S., K.U., Y.Y., R.M., N.S., T.A., A.Y., N.K., and T.K. participated in data collection and analysis. M.M. participated in conception of the study, supervision, and manuscript editing. All authors read and approved the final manuscript.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2023 Sep 4; 13:14532.
PMC10477254 (PMID: 37666894)

Introduction
Non-alcoholic fatty liver disease (NAFLD) is one of the most common causes of hepatic disorders, defined by the presence of steatosis in ≥ 5% of hepatocytes in the absence of excessive alcohol consumption or other known liver diseases 1 , 2 . NAFLD represents a spectrum of chronic liver diseases, starting with simple steatosis and progressing towards non-alcoholic steatohepatitis (NASH), cirrhosis, and ultimately hepatocellular carcinoma 1 . Recently, it has been found that NAFLD can also increase the risk of extrahepatic cancers, for example, bladder cancer, which has a high prevalence in the elderly population and is associated with NAFLD through insulin resistance 3 . NAFLD, with a global prevalence rate of nearly 25%, has been considered a global public health concern 2 . This prevalence has been reported up to 70% in diabetic and obese patients 2 .
NAFLD, independent of other risk factors, is highly associated with an increased risk of metabolic disorders, including cardiovascular disease, type 2 diabetes mellitus (T2DM) and metabolic syndrome (MetS), in obese and non-obese individuals 4 , 5 . Lean NAFLD has also been shown to be a stronger risk factor for incident T2DM than obesity without NAFLD or MetS 4 . A recent meta-analysis reported that the risks of MetS and T2DM in non-obese NAFLD patients are 5.43 and 4.81 times higher, respectively, than in non-obese individuals without NAFLD 6 . Also, a 7-year follow-up of obese and non-obese NAFLD patients revealed that the incidence of cardiometabolic complications is the same in both groups, which indicates the need for close monitoring 7 . Therefore, cardiometabolic comorbidities seem to be independent of obesity 8 , 9 . However, although the prevalence of NAFLD is growing substantially in the non-obese population, it has mostly been studied in obese individuals 10 , 11 .
Although non-obese NAFLD patients have been shown to share clinical outcomes with their obese counterparts 12 , there is insufficient evidence on cardiometabolic status and disease severity in lean NAFLD. The differences in the characteristics of obese and non-obese NAFLD patients also remain poorly characterized. This cross-sectional study therefore aimed to compare cardiometabolic status, separately and in combination, in obese and non-obese patients with NAFLD.

Methods and materials
Subjects and study design
From 2019 to 2021, 452 Fibroscan-proven NAFLD patients were enrolled in the present study and their clinical data were collected prospectively. The inclusion criteria for this cross-sectional study were as follows: (1) adults 18–65 years old; (2) Fibroscan findings confirming fatty liver grade ≥ 2; and (3) willingness to participate in the study. We excluded subjects with (1) significant alcohol consumption (> 30 g/day); (2) a history of treatment for viral hepatitis; (3) a diagnosis of renal failure, malignancy, infectious disease, or chronic liver disease other than NAFLD; and (4) effective drug treatment for NAFLD and/or bariatric surgery within the past 6 months.
This study was conducted in accordance with the ethical guidelines of the Helsinki Declaration. The written informed consent form was signed and dated by all participants. The study protocol has been approved by the Ethical Committees of National Nutrition and Food Technology Research Institute, Shahid Beheshti University of Medical Sciences.
Clinical and laboratory evaluations
After recording general information about medical history, medications, alcohol consumption, and smoking habits, all patients underwent laboratory testing, physical examination and liver assessment. Height and body weight were measured without shoes and in light clothing by a well-trained nutritionist. BMI was calculated as body weight (kg) divided by height squared (m 2 ). Waist circumference was measured in centimeters at the minimum circumference between the lower rib and the iliac crest to the nearest 0.1 cm using an inextensible metric tape. Blood pressure was measured in seated position, after 15 min rest, by using a standard mercury sphygmomanometer.
Venous blood samples were collected after a 12-h overnight fast to measure serum levels of aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma-glutamyl transpeptidase (GGT), total cholesterol (TC), triglycerides (TG), high density lipoprotein-cholesterol (HDL-C), low density lipoprotein-cholesterol (LDL-C), fasting blood sugar (FBS) and insulin. All laboratory parameters were assessed by commercial kits (Pars Azmoon, Tehran, Iran) using standard methods.
Enrolled individuals were classified based on BMI as normal weight (< 25 kg/m 2 ), overweight (25–30 kg/m 2 ) or obese (≥ 30 kg/m 2 ) 13 . Participants who met at least three of the following metabolic syndrome criteria recommended by the National Cholesterol Education Program's Adult Treatment Panel III (NCEP: ATP III) 14 were categorized as having an unhealthy metabolic phenotype: (1) abdominal obesity, defined as waist circumference > 102 cm in men and > 88 cm in women; (2) systolic/diastolic blood pressure ≥ 130/85 mmHg and/or current use of anti-hypertensive medication; (3) FBS ≥ 100 mg/dl and/or current treatment with anti-diabetic medication; (4) low HDL-C concentration (< 40 mg/dl in men and < 50 mg/dl in women); and (5) triglycerides ≥ 150 mg/dl and/or current use of lipid-lowering drugs.
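The BMI classification and the ATP III criterion count above can be sketched as a small helper. The thresholds are exactly those stated in the text; the function and parameter names are illustrative, not from the study's analysis code.

```python
def bmi_class(weight_kg, height_m):
    """Classify BMI using the study's cut-offs (kg/m^2)."""
    bmi = weight_kg / height_m ** 2
    if bmi < 25:
        return "normal weight"
    return "overweight" if bmi < 30 else "obese"

def mets_criteria_count(sex, waist_cm, sbp, dbp, on_bp_meds,
                        fbs_mg_dl, on_dm_meds, hdl_mg_dl,
                        tg_mg_dl, on_lipid_meds):
    """Count NCEP ATP III abnormalities; >= 3 marks the unhealthy metabolic phenotype."""
    criteria = [
        waist_cm > (102 if sex == "male" else 88),   # abdominal obesity
        sbp >= 130 or dbp >= 85 or on_bp_meds,       # elevated blood pressure
        fbs_mg_dl >= 100 or on_dm_meds,              # impaired fasting glucose
        hdl_mg_dl < (40 if sex == "male" else 50),   # low HDL-C
        tg_mg_dl >= 150 or on_lipid_meds,            # hypertriglyceridemia
    ]
    return sum(criteria)

# example: an obese man (BMI ~31) meeting three criteria
print(bmi_class(95, 1.75))                                            # obese
n = mets_criteria_count("male", 105, 135, 80, False, 110, False, 45, 120, False)
print(n, n >= 3)                                                      # 3 True
```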
Liver assessment
NAFLD was diagnosed using FibroScan® (Echosens, Paris, France) equipped with an XL probe, after exclusion of heavy alcohol consumption and viral or other chronic liver disease. The examination was carried out by an experienced hepatologist according to the manufacturer's protocol. Fibrosis was measured in kilopascals and scored on a 6-grade scale, from normal to severe fibrosis and cirrhosis. Steatosis was reported in decibels per meter (dB/m) and graded from 0 to 3.
Moreover, the following indices were calculated to estimate fatty liver, each according to the formula in its original publication:

Lipid Accumulation Product (LAP) 15

Fatty Liver Index (FLI) 16

Hepatic Steatosis Index (HSI) 17

Fatty Liver Score (FLS) 18

BMI, Age, ALT, TG (BAAT) score, calculated as the sum of four categorical variables 19 (all patients had triglycerides < 400 mg/dl).
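The formulas for these indices appeared as equations in the original article and did not survive text extraction. As a hedged sketch, two of them are reproduced below as commonly published in the cited literature (LAP per Kahn; HSI per Lee et al.); treat the exact coefficients as assumptions to be verified against the original publications.

```python
def lap(sex, waist_cm, tg_mmol_l):
    # Lipid Accumulation Product (assumed, per the cited literature):
    # (WC - 65) x TG for men, (WC - 58) x TG for women, with TG in mmol/L
    offset = 65 if sex == "male" else 58
    return (waist_cm - offset) * tg_mmol_l

def hsi(alt, ast, bmi, female=False, diabetes=False):
    # Hepatic Steatosis Index (assumed, per the cited literature):
    # 8 x ALT/AST + BMI, plus 2 if female and 2 if diabetic
    return 8 * alt / ast + bmi + (2 if female else 0) + (2 if diabetes else 0)

print(round(lap("male", 100, 1.8), 1))          # 63.0
print(round(hsi(40, 25, 31, female=True), 1))   # 45.8
```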
Statistical analysis
All statistical analyses were carried out using SPSS software, version 20.0 (SPSS Inc., Chicago, IL, USA). A two-sided P value less than 0.05 was considered statistically significant. The Kolmogorov–Smirnov test was used to check the normality of the data. Continuous variables are presented as mean ± standard deviation (SD) and compared by one-way analysis of variance (ANOVA). Categorical variables are presented as frequency and percentage and compared by the Chi-square test.
Both univariate and multivariate logistic regression models were applied to calculate odds ratios (ORs) and 95% confidence intervals for the occurrence of metabolic syndrome in each BMI category. In the multivariate logistic regression models, we adjusted for potential confounding factors, including age, sex, and steatosis and fibrosis score.
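The adjusted ORs reported in the study come from multivariate logistic regression in SPSS. As a minimal illustration of the underlying arithmetic only, the sketch below computes an unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval from a 2×2 table; the counts are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio from a 2x2 table with a Woolf (log) 95% CI.

    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)       # SE of ln(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# hypothetical counts: MetS vs. no MetS in obese vs. normal-weight patients
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 4.0 1.2 13.28
```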
Ethics approval and consent to participate
The Ethical Committee of Shahid Beheshti University of Medical Sciences approved the study protocol in accordance with the Declaration of Helsinki. All patients signed an informed consent form and the aims and procedures were explained to them.

Results
Clinical and biochemical characteristics according to BMI classification
The clinical and biochemical characteristics of participants stratified by BMI are listed in Table 1 . Among the 452 NAFLD patients (mean age 45 ± 11.88 years; women 47%), 82 were classified as normal weight, 121 as overweight and 249 as obese. The mean age of patients did not differ significantly among BMI classes. Nearly half of the men (49.8%) and more than half of the women (61%) were obese. There was a stepwise elevation of systolic blood pressure and serum levels of ALT, AST, GGT, FBS and LDL-C with increasing BMI, although the differences among the three BMI classes in biochemical parameters were not significant, except for FBS and HDL-C. The frequency of patients with diabetes mellitus (DM) and metabolic syndrome (MetS) increased significantly along with BMI (P < 0.001). Systolic and diastolic blood pressure did not differ significantly between the three BMI classes. Interestingly, patients with a BMI of less than 25 showed the highest severity of fibrosis and steatosis. In contrast, the fatty liver indices increased along with weight and BMI: LAP, FLI, HSI, FLS and BAAT were all significantly higher in obese patients than in patients with a BMI below 30 (P < 0.001).
Association between BMI and risk of metabolic syndrome
The association between the risk of metabolic syndrome and each BMI category was assessed using logistic regression. As indicated in Table 2 , both crude and fully adjusted analyses showed that the risk of developing metabolic syndrome increases with increasing BMI. In Model 4, after adjusting for all confounders including age, sex, and steatosis and fibrosis score, the risk of metabolic syndrome in patients with BMI above 30 and in patients with BMI 25–30 was 4.85 times and 3.74 times higher, respectively, than in those with BMI less than 25 (P trend < 0.001).
Linear regression analysis (Table 3 ) after adjusting all confounders including age, sex, steatosis and fibrosis score, showed that waist circumference (β = 0.770, P < 0.001) and serum concentration of fasting blood glucose (β = 0.193, P = 0.002) and triglyceride (β = 0.432, P < 0.001) were the determinants of metabolic syndrome occurrence in NAFLD patients.
We also calculated the risk of metabolic syndrome with increasing BMI after removing the influence of sex, age, and steatosis and fibrosis grade. As shown in Fig. 1 , regardless of gender, age, and grade of steatosis and fibrosis, an increase in BMI was significantly associated with an increased risk of metabolic syndrome. However, the ORs in males and in patients under 50 years of age were slightly higher than those in females and in patients over 50 years of age. This risk also increased with increasing grade of steatosis and fibrosis.

Discussion
This cross-sectional study was performed to elucidate the differences in metabolic features of obese and non-obese NAFLD patients. Our results disclosed that both obese and non-obese NAFLD patients shared several clinical and laboratory characteristics, although MetS was more prevalent among obese participants. In addition, according to the analysis, the risk of metabolic syndrome and the severity of fatty liver increase along with increasing BMI. These results may have significant clinical implications.
Obese patients with metabolic syndrome accounted for the largest percentage (68%) of the present study population. However, NAFLD is also seen in lean people, but it is usually asymptomatic and remains undiagnosed 20 . Previous studies have reported that fasting blood sugar, HbA1C, insulin resistance, and blood pressure are lower in non-obese NAFLD patients 21 – 23 . Contrary to this evidence, in the present study, no significant difference was observed between obese and non-obese patients in terms of metabolic syndrome components. About half (41.5%) of lean NAFLD patients had abdominal obesity, their mean systolic and diastolic blood pressure was higher than normal and they had dyslipidemia. Interestingly, they also showed higher scores of fibrosis and steatosis.
Although non-obese NAFLD patients in the general population have been shown to share clinical outcomes with their obese counterparts, comparisons of the cardiometabolic risk profile of non-obese versus obese NAFLD have yielded conflicting results 12 . In patients with type 2 diabetes, the cardiometabolic risk profile of those with non-obese NAFLD was shown to be no better than that of their obese counterparts. Interestingly, cardiometabolic disorders in non-obese women with type 2 diabetes showed a stronger relationship with NAFLD than in obese female patients 12 . On the other hand, the serum concentration of remnant lipoprotein cholesterol (RLP-C), an indicator of cardiovascular disease, has been shown to portend a worse prognosis in non-obese individuals; this index is independently associated with the incidence of NAFLD 24 .
The evidence about morbidity and mortality in obese and non-obese NAFLD patients remains contradictory. According to a meta-analysis conducted in 2018, obesity was associated with a worse long-term prognosis in NAFLD patients 25 . Conversely, a 2020 multiethnic study reported that the 15-year cumulative all-cause mortality in non-obese NAFLD patients (51.7%) was higher than that of obese NAFLD patients (27.2%) and non-NAFLD subjects (20.7%) 26 . Investigation of NHANES III data also disclosed that lean NAFLD was independently associated with all-cause and cardiovascular disease mortality 27 . Although evaluation of the prognosis and long-term consequences of lean NAFLD requires further studies, current findings highlight the importance of early diagnosis and treatment of NAFLD in the lean/non-obese population.
The presence of metabolic abnormalities in non-obese NAFLD patients may be due to insulin resistance, indicating NAFLD-related adverse outcomes in these individuals. In the present study, the severity of liver fibrosis and steatosis was significantly higher in non-obese patients than in obese patients. About half of the lean NAFLD patients in the present study had excess visceral fat and suffered from abdominal obesity. The pattern of visceral fat distribution, regardless of BMI, is associated with unfavorable metabolic consequences 28 . Since visceral fat is associated with insulin resistance and dyslipidemia, abdominal obesity is more important than total body fat in these patients 28 , 29 . In agreement with these findings, a recent Korean cohort showed that a higher ratio of visceral to subcutaneous fat was associated with an increased risk of fibrosis in NAFLD patients, regardless of their BMI 30 . Total body fat slightly above the average of other ethnic groups has led to a higher incidence of NAFLD-related metabolic disorders in Asians 11 . Visceral obesity may be a possible explanation for these observations, which needs to be clarified by further studies. Visceral fat causes a low-grade inflammatory state and metabolic disorders, including metabolic syndrome and NAFLD, by recruiting pro-inflammatory macrophages 31 , 32 .
One of the strengths of this study was the use of FibroScan by an expert hepatologist for the diagnosis of NAFLD and histological assessment. Relatively complete laboratory information allowed a thorough comparison of metabolic status between patients. Despite these strengths, the results of this study should be interpreted with caution due to the following limitations. First, owing to the cross-sectional design, the causality of the relationships between fatty liver, metabolic syndrome, and abdominal obesity could not be confirmed. Second, normal-weight patients comprised a relatively small percentage of the population. Third, it was not possible to assess body composition to accurately determine fat mass. Fourth, the inclusion criteria might have biased the results, because patients with grade 1 NAFLD were not included in the study. Therefore, further large-scale studies accounting for gender, steatosis and fibrosis grade, and body composition will be required to better elucidate the pathogenesis and features of NAFLD.

Conclusion
We performed this cross-sectional study to elucidate the metabolic differences between obese and non-obese NAFLD patients. Based on the results, it can be concluded that non-obese NAFLD patients show a degree of NAFLD histological severity and metabolic abnormality similar to their obese counterparts. Insulin resistance and abdominal obesity, regardless of BMI, might play a role in the severity of steatosis and fibrosis in these patients.

Abstract

Nonalcoholic fatty liver disease (NAFLD) is closely associated with cardiometabolic abnormalities. This association could be partly, but not entirely, influenced by weight. This study aimed to compare cardiometabolic risk factors between obese and non-obese NAFLD patients and to explore the relationship between adiposity and the severity of fatty liver. This cross-sectional study included 452 patients with Fibroscan-proven NAFLD. Anthropometric measurements, metabolic components and hepatic histological features were evaluated. The risk of metabolic syndrome in each body mass index (BMI) category was analyzed using logistic regression. The prevalence of metabolic syndrome was 10.2%, 27.7%, and 62.1% in normal-weight, overweight and obese participants, respectively. Regression analysis showed that the risk of metabolic syndrome in overweight and obese NAFLD patients was 3.74 and 4.85 times higher, respectively, than in patients with normal weight. Waist circumference (β = 0.770, P < 0.001) and serum concentrations of fasting blood glucose (β = 0.193, P = 0.002) and triglyceride (β = 0.432, P < 0.001) were the determinants of metabolic syndrome occurrence in NAFLD patients. Metabolic abnormalities were similar in obese and non-obese NAFLD patients, although the increase in BMI was associated with an increased risk of metabolic syndrome.
Author contributions
Conceptualization, Z.Y. & D.F.; Formal analysis, Z.Y.; Methodology, D.F. & A.H.; Writing—review & editing, Z.Y. and A.H. All authors read and approved.
Data availability
All data generated or analyzed during this study are included in this published article.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2023 Sep 4; 13:14531.
PMC10491610 (PMID: 37684257)

Introduction
Patients with schizophrenia (SCZ) usually show cognitive deficits across multiple domains, which are core symptoms of SCZ 1 . There is clear evidence that cognitive impairments are present at different stages of SCZ, from the prodrome to the multi-episode stage, and are more severe in chronic patients 2 . Longitudinal studies (at least 6-month follow-up) have reported significant associations between cognition and community outcomes in SCZ 3 . Antipsychotic drugs are the main treatment for SCZ and are effective in alleviating positive symptoms; however, they have minimal effect on cognitive deficits, and treatment of these deficits remains unsatisfactory 4 , 5 . Therefore, there is a need to further explore novel therapeutic approaches to enhance cognitive function in patients with SCZ.
The important pathogenic role of chronic inflammation in the disease course of SCZ has been documented, where cytokines are considered to be essential factors related to the onset and progression of cognitive dysfunction in patients 6 . SCZ has been shown to be correlated with dysfunctions or deficits in all components of the immune system: from innate to adaptive immunity, and from humoral to cellular immunity 7 – 10 . Evidence supports that microglial activation and kynurenine pathway-related brain abnormalities are underlying pathological mechanisms for cognitive decline in SCZ 11 . Some pro-inflammatory components from immune or glial cells induce forms of plasticity that regulate pre- and postsynaptic functions 12 . Three cyclooxygenase (COX) isoenzymes catalyze the metabolism of arachidonic acid to prostaglandins (PGs) 13 . COX-2 is constitutively expressed in the central nervous system (CNS). It not only interacts with neurotransmitters such as serotonin, acetylcholine, and glutamate, but also participates in the regulation of the immune system and CNS inflammation through the action of prostaglandins 14 . The COX-2–PGE2 signaling pathway is well known to suppress natural killer cells, T cells, and type-1 immunity, but to promote type-2 immunity 15 . Thus, as part of the immune system, the COX-2 gene may contribute to the balance of different immune cell types in certain diseases 16 and could thus serve as a predictive biomarker for the detection and treatment of clinical symptoms.
Recently, repetitive transcranial magnetic stimulation (rTMS) has been reported to be efficacious for alleviating auditory hallucinations and negative symptoms in patients with SCZ 17 – 20 . In particular, a previous meta-analysis supported that high frequency (HF) rTMS targeting the left DLPFC enhanced cognitive functions in SCZ, especially working memory 21 . Meanwhile, rTMS has been reported to induce long-lasting effects on memory in patients with dementing disorders and healthy individuals 22 – 24 . The potential mechanism of rTMS-enhancing cognitive performance in SCZ may be related to the neuronal priming, driving of oscillatory activity, and synaptic neuroplastic changes in directly stimulated DLPFC or other relevant areas by the magnetic fields 25 . Specifically, noninvasive brain stimulation can produce “neuroenhancement” when applied to the brain 26 . However, some clinical trials have revealed that the administration of HF-rTMS has no effect on cognitive performance in patients with SCZ 27 – 29 , suggesting that the effects of HF-rTMS on cognition in SCZ were inconsistent and mixed.
Current understanding of the mechanism of rTMS treatment attributes its effects in part to the modulation of local cortical plasticity and/or remote neural circuitry 30 . Several biological pathway biomarkers associated with synaptic plasticity, particularly metaplasticity, have been reported to influence the response to rTMS in neuropsychiatric disorders 31 , 32 . Based on the relationship between COX-2 and cognitive function, we hypothesized that the COX-2 rs5275 polymorphism is associated with cognitive improvements after HF-rTMS over the left DLPFC for 4 consecutive weeks in SCZ patients. To test this hypothesis, this study investigated whether cognitive improvements were associated with the COX-2 rs5275 polymorphism after controlling for covariates.

Methods
Patients
The study protocol was approved by the Institutional Review Board of Hebei Rongjun Hospital. Each patient provided written, informed consent to participate in this clinical trial.
One hundred and thirty-one SCZ inpatients were recruited from the hospital. Recruited patients had to meet the following criteria: (1) SCZ diagnosed by DSM-IV using the SCID-I/P; (2) age between 30 and 70 years; (3) male inpatient; (4) no modified electroconvulsive therapy (MECT) in the past 24 months; (5) at least 5 years of illness; and (6) a stable dose of antipsychotic drugs for more than 6 months. The exclusion criteria were as follows: (1) any other psychiatric disorder diagnosed using the SCID-I/P; (2) pregnancy; (3) substance abuse or drug dependence; (4) history or family history of epilepsy; (5) risk of suicide or self-harm; (6) switching the type or changing the dose of antipsychotic during rTMS treatment; and (7) comorbid central nervous system disorder, assessed by verbally asking patients whether they had such disorders (e.g., brain pathology, severe headache, or severe head injury).
Treatment protocol
The samples were from two randomized, blinded, controlled trials. The duration of rTMS treatment was 4 weeks for a total of 20 sessions. A computer program was used to generate a random number list. Patients with SCZ were assigned to either the active 10 Hz rTMS group or the sham group according to the randomized sequence. Researchers, patients, and raters remained blinded to the trial grouping throughout the trial.
Neuronavigated rTMS was administered with the MagStim Rapid Stimulator following the protocols described in our previous studies 33 , 34 . Each patient was treated over the left DLPFC once a day, five times a week, for a total of four consecutive weeks. Patients in the sham group received the same study procedures as the 10 Hz stimulation, except that the sham coil (P/N: 3910-00) looked identical to the active coil, so patients could not distinguish whether they were assigned to the 10 Hz group or the sham group. Stimulation over the left DLPFC was delivered at 110% of the motor threshold (MT) 35 .
Outcomes
The outcome measure was cognitive functions assessed using the repeatable battery for the assessment of neuropsychological status (RBANS) at weeks 0 and 4 after treatment. It was assessed by the nurses who were blinded to the randomization number. RBANS consists of the total score and five index scores: immediate memory, visuospatial/constructional, attention, language, and delayed memory index scores 36 .
Genotyping
Blood was drawn from patients to separate white blood cells, from which germline DNA was extracted using standard procedures. The rs5275 polymorphism in the COX-2 gene was genotyped on the MALDI-TOF MS platform (Sequenom, CA, USA), following the standard procedure.
Statistical analysis
Intent-to-treat (ITT) analyses were conducted in this study. Missing outcome data after 3 weeks of treatment were filled in using the last observed non-missing value (last observation carried forward). Differences in clinical characteristics and cognitive functions between the real and sham rTMS groups, or between genotype groups, were examined by ANOVA. The primary hypothesis was tested in the real rTMS group. The impact of the rs5275 polymorphism on cognitive functions was examined by repeated-measures (RM) analysis of covariance (ANCOVA), with Time (two levels: baseline and week 4) as the within-subject factor and genotype (two levels) as the between-subject factor. We focused on the interaction effect of Time and genotype group; if it was significant, the difference between the two genotypes at week 4 was compared by ANCOVA with the baseline scores as covariates. In addition, improvements in cognitive functions were compared between the two genotype groups using the Wilcoxon signed-rank test. Exploratory regression analysis was used to examine predictive factors for the improvement of neurocognitive functioning. A two-tailed p-value was used, and the significance level was set at 0.05.
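The last-observation-carried-forward imputation described above can be sketched as follows; representing each patient's visits as a list with None for missed assessments is an illustrative assumption, not the study's actual data layout.

```python
def locf(series):
    """Carry the last observed (non-None) value forward; leading gaps stay None."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled

# e.g. RBANS totals across visits, with the final assessment missed
print(locf([55, 58, None, None]))  # [55, 58, 58, 58]
```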
Human and animal rights
No animals were used in this research. All human research procedures followed were in accordance with the standards set forth in the Declaration of Helsinki principles of 1975, as revised in 2008 ( http://www.wma.net/en/20activities/10ethics/10helsinki/ ).

Results
Baseline demographic and neurocognitive functioning
RM-ANCOVA analysis revealed a significant interaction effect of time and stimulation group on immediate memory ( p < 0.05). Of the initial 131 recruited patients, those without COX-2 rs5275 genotype data were removed, leaving a total of 76 patients in the following analyses: 48 patients in the HF-rTMS group and 28 patients in the sham group. Genotype frequencies for COX-2 rs5275 were CC ( n = 8), CT ( n = 24), and TT ( n = 44). Because the number of patients with the CC genotype was insufficient for the following analyses, CT heterozygotes were combined with CC homozygotes as C allele carriers.
Table 1 shows the RBANS total score and its five subscores among the rs5275 genotypes in SCZ patients. There were no significant differences in age, education years, number of hospitalizations, dose of antipsychotics, duration of illness, or onset age between C allele carriers and patients with the TT homozygote (all p > 0.05). However, we found a significant difference in the RBANS total score between genotypes ( p = 0.04): C allele carriers showed higher cognitive performance than TT homozygotes (61.0 ± 12.6 vs. 55.0 ± 11.0).
Comparison of stimulation efficacy between different genotypes in the rTMS treatment group
First, the RM-ANCOVA analysis including all patients ( n = 76) revealed a significant interaction effect of the stimulation group (real vs. sham groups) on the immediate memory index of RBANS ( F = 4.0, p = 0.049).
RM-ANCOVA analysis was performed to investigate the improvements in cognitive functions between the genotypes in the real rTMS group. We found significant genotypic group × time interaction effects on immediate memory index, delayed memory index, and RBANS total score in the rTMS treatment group (all p < 0.05) (Table 2 ) (Fig. 1 ). In addition, as shown in Table 2 , the main effects of group on delayed memory index and RBANS total score and main effects of time on immediate memory, attention, delayed memory, and RBANS total score were significant (all p < 0.05). There was no significant interaction effect of genotypic group × time on RBANS scores in the sham group ( F = 0.22, p = 0.64).
Improvements in cognitive functions among genotypes in the HF-rTMS group
Further analyses between the two genotypes in the active group suggested significant differences in the increases in immediate memory, delayed memory, and total RBANS score ( Z = −2.3, Z = −2.1, Z = −2.0, all p < 0.05). The increases in immediate memory, delayed memory, and total RBANS score in C allele carriers were significantly greater than in patients with the TT homozygote (all p < 0.05). With cognitive improvement as the dependent variable, rs5275 as the independent variable, and age, duration of illness, educational level, and baseline cognitive performance as covariates, regression analysis revealed that genotype was a significant predictive factor for immediate memory improvement (beta = 0.28, t = 2.28, p = 0.026) ( R 2 = 0.17).

Discussion
We found that the rs5275 polymorphism in the COX-2 gene was correlated with cognitive improvement following 4 weeks of HF-rTMS treatment in SCZ, after adjusting for confounding factors. In addition, rs5275 C allele carriers showed greater improvements in cognitive function.
We found that SCZ patients with the TT genotype showed worse cognitive performance than C allele carriers. The rs5275 variant, located in the 3′UTR of the COX-2 gene, affects COX-2 transcriptional activity and expression levels 37 . A growing body of studies suggests that activated proinflammatory cytokines modulate synaptic efficacy and the intrinsic excitability of neurons, influencing cognitive performance, memory, and behavior through immune-triggered neuroplasticity 12 . The COX-2 enzyme, inducibly expressed in microglia and astrocytes in response to proinflammatory molecules 38 , 39 , is also involved in regulating the immune response and cognitive function in patients with SCZ. The frontal cortex and hippocampus, brain regions involved in cognition and memory, express COX-2 in postsynaptic dendritic spines and excitatory terminals of cortical and spinal cord neurons, where its expression is regulated by synaptic activity 40 , 41 . Animal models of long-term potentiation (LTP) and long-term depression (LTD) revealed a modulatory role of COX-2 in LTP 42 – 44 , and pharmacological COX-2 inhibition directly attenuated LTP in the CA1 region of the hippocampus 45 , suggesting a fundamental role of the COX-2 enzyme in learning and memory 46 .
To the best of our knowledge, this study is the first to report an association between the COX-2 rs5275 polymorphism and cognitive improvement after 4 weeks of rTMS treatment in SCZ. Since COX-2 is involved in regulating inflammation and immune responses, our findings provide further evidence that inflammation and certain modulations of the immune system underlie cognitive decline in SCZ. Interestingly, the COX-2 enzyme has been reported to be up-regulated by HF stimulation, similar to the induction of LTP. More importantly, clinical trials have also found that COX-2 inhibitors modulate immune function and show effects on cognition in patients with SCZ 14 , 47 , 48 . In addition, COX-2 is induced by the trans-synaptic activation of the dopaminergic, serotoninergic, and cholinergic neurotransmitter systems, and its decreased activity may contribute to the development of cognitive decline 49 , 50 . rs5275 C allele carriers showed greater improvement in cognitive function after HF-rTMS treatment, whereas patients with the TT genotype showed no significant improvement. The greater improvement in C allele carriers following HF-rTMS may reflect modulation of NMDAR-dependent LTP and LTD synaptic plasticity. Upon stimulation, neuronal COX-2 enzymes are activated in response to synaptic excitation to produce the predominant COX-2 metabolite in neurons, which in turn stimulates the release of neurotransmitters such as glutamate. The different responses among genotypes may be due to rTMS-induced persistent changes in COX-2 production-related signaling and in the intensity of the LTP/LTD-like effects after stimulation between TT homozygotes and C allele carriers.
It should be noted that in this study, time may have affected the cognitive functions of patients with SCZ. As shown in Table 2 , which reports the baseline and follow-up cognitive scores for both rs5275 TT homozygotes and rs5275 C carriers receiving active HF-rTMS, cognitive function increased in both genotypic groups. However, the effect of time (i.e., repeated cognitive assessments) was much smaller than the effect of group (i.e., rs5275 polymorphism), and the time × genotypic group interaction was significant. Thus, the effect of genotype was stronger than that of time in patients receiving HF-rTMS. Given that C carriers also performed better at baseline than TT homozygotes, and that their increases in cognitive function were greater, we speculate that C carriers may function better at baseline, be less affected by this disorder, and therefore achieve greater cognitive improvement after rTMS treatment. Nonetheless, this is only speculation, and our study cannot clearly show whether these differences were driven by rTMS or by group differences in cognitive ability.
Several limitations should be noted. First, this is a pilot study with a small sample size, which reduces its statistical power. Second, only long-term hospitalized male inpatients on stable antipsychotic medication were recruited; thus, the current findings have limited generalizability in clinical applications and cannot be extended to female patients. Third, only a single nucleotide polymorphism within the COX-2 gene was analyzed. Several other functional polymorphisms have been identified within the COX-2 gene, including rs20417 in the promoter region, rs689466, and rs3218625 in the coding region 37 . It remains unclear whether other polymorphisms interact with rs5275 to affect the response to rTMS in SCZ. In addition, the interrelationship between the COX-2 gene and other immune-related genes was not examined in the present study. Fourth, COX-2 levels in CSF or blood were not measured, so the impact of the rs5275 C/T variant on COX-2 expression or levels remains unknown. Further studies should measure COX-2 levels to corroborate the findings reported here.
In conclusion, the rs5275 variant in the COX-2 gene may be involved in the response to neuronavigated HF-rTMS stimulation in long-term hospitalized patients with SCZ. The present study provides further evidence for a role of immune-related molecules in the clinical response to rTMS in SCZ. However, given the limitations stated above and the possible involvement of complex neuroplasticity-related biological pathways not examined here, additional replications using larger sample sizes are warranted to better understand the potential role of COX-2 in short- and long-term rTMS treatment outcomes.
This study was supported by grants from the Science and Technology Program of Guangzhou (202206060005, 202201010093, SL2022A03J01489), Guangdong Basic and Applied Basic Research Foundation Outstanding Youth Project (2021B1515020064), Medical Science and Technology Research Foundation of Guangdong (A2023224), the Health Science and Technology Program of Guangzhou (20231A010036), Scientific research project of traditional Chinese medicine of Guangdong (20211306), Guangzhou Municipal Key Discipline in Medicine (2021–2023), Guangzhou High-level Clinical Key Specialty and Guangzhou Research-oriented Hospital.
Author contributions
P.W., X.G., R.S., and F.W. were responsible for clinical data collection. X.G. was responsible for the laboratory experiments. F.W. and M.X. were involved in evolving the ideas and editing the manuscript. P.W. and M.X. were responsible for study design, statistical analysis, and manuscript preparation. All authors have contributed to and approved the final manuscript.
Competing interests
The authors declare no competing interests.
Consent for publication
Written informed consent was obtained from all participants.
Ethics approval and consent to participate
The study was approved by the Institutional Review Board of HeBei Rongjun Hospital (Ethic no. 20070310). | CC BY | no | 2024-01-16 23:35:00 | Schizophrenia (Heidelb). 2023 Sep 8; 9(1):56 | oa_package/22/9e/PMC10491610.tar.gz |
Introduction
Grapevine Pinot gris virus (GPGV) is a species in the genus Trichovirus , in the family Betaflexiviridae . The GPGV genome is a linear, positive-sense, single-stranded RNA of approximately 7259 nucleotides (nt), excluding the poly (A) tail at the 3′ end [ 1 ]. It has a typical Trichovirus genome organization, with non-coding regions at the 5′ and 3′ ends and three overlapping open reading frames (ORFs): ORF1 encodes the 214 kilodalton (kDa) viral replicase-associated polyprotein (1865 aa), which includes methyltransferase (44–333 aa), helicase (1040–1277 aa) and RNA-dependent RNA polymerase (RdRp) (1447–1797 aa) domains; ORF2 encodes the 42 kDa cell-to-cell movement protein (MP) (367 aa); and ORF3 encodes the 22 kDa viral coat protein (CP) (195 aa) [ 1 ]. Previous studies indicate that GPGV is genetically diverse [ 2 – 8 ].
GPGV was first identified in plants of cv. Pinot gris in vineyards of northern Italy in 2012 [ 1 ] showing characteristic grapevine leaf mottling and deformation disease (GLMD). Other symptoms associated with infection include delayed budburst, shortened shoot internodes and increased berry acidity [ 1 , 9 , 10 ]. GLMD is caused by GPGV, and infection can lead to serious agronomic losses in sensitive grapevine varieties through reduced yield and fruit quality [ 11 – 15 ]. Since it was first described in Pinot gris, GPGV has been found in other grapevine varieties with GLMD and in varieties that are asymptomatic [ 1 , 2 , 6 , 16 ]. Some studies suggest an association between GLMD and specific GPGV strains [ 1 , 4 , 6 , 17 ], and one study showed that virus titer and small interfering RNA accumulation were affected by polymorphisms at the 3′ end of the movement protein (MP) gene, leading to differences in symptom severity [ 18 ]. Although significant progress has been made in understanding the interaction between GPGV and GLMD [ 15 , 19 – 21 ], the effects of GPGV infection on grapevines are still poorly understood, including the relationship between infection and disease symptoms. Interestingly, asymptomatic GPGV infections have been reported in some sensitive varieties that are usually symptomatic, such as Pinot gris and Traminer [ 2 , 22 ], casting doubt on the strength of the association between GPGV and GLMD.
It was predicted that GPGV originated from Asia, with China being the most probable source of emergence [ 5 ]. Thereafter, GPGV has gradually spread to several grape-producing regions of the world including Europe, USA, Canada, Middle East, and Asia [ 23 ]. GPGV colonizes the vascular tissues of grapevines [ 24 ] and its global spread is likely due to the movement of infected planting material. Transmission by Colomerus vitis , commonly known as grape leaf bud-blister mites, results in the spread of the virus within vineyards [ 6 , 10 , 25 ].
In 2016, GPGV was detected in Australia in New South Wales (NSW), and was subsequently found in Victoria (VIC) and South Australia (SA) [ 23 , 26 ]. It is suspected that GPGV was introduced to Australia via infected propagation material sometime between 2003, when the movement of the virus into Europe was predicted, and 2014, when testing in Australian post-entry quarantine was introduced [ 1 , 23 ]. In Australia, GPGV has been found in a broad range of wine grape, table grape and rootstock varieties, but the characteristic GLMD symptoms caused by GPGV infection have not been reported [ 1 , 23 ]. There has been some recent conjecture that GPGV is associated with a restricted spring growth symptom in Australian table grapes, which includes delayed bud burst, shortened internodes, stunting and zig-zag shoots [ 23 ]. To better understand the potential risk of GPGV in Australian vineyards, molecular methods were used to determine the GPGV diversity in rootstock, table, and wine grape varieties, which showed a range of symptoms or were healthy.
Grapevine sampling
During 2017–2021, Agriculture Victoria’s Crop Health Services (CHS) plant diagnostic laboratory received a total of 2171 samples for GPGV testing from different grape-growing regions of Australia, including 1531 samples from southeastern Australia in the states of VIC, NSW, and SA. RNA from 191 GPGV-positive CHS samples was retained for further analysis, and an additional 126 grapevine samples were collected from southeastern Australia for this study (n = 317). Where possible, each of the 317 samples was checked for virus-like symptoms. The 317 grapevines included 70 table grapes, 126 wine grapes and 16 rootstocks; the variety type of the remaining 105 grapevines was unknown.
RNA extraction and reverse transcription polymerase chain reaction (RT-PCR)
RNA was extracted from 0.3 g tissue (fresh weight) of each grapevine sample using the RNeasy® Plant Mini Kit (Qiagen) and eluted in 30 μl RNase-free water, as described by Constable et al . [ 27 ], and quantified using a spectrophotometer (Nanodrop, Thermo Fisher Scientific). Each RNA extract was stored at − 20 °C until use. An RT-PCR assay for the detection of NADH dehydrogenase ND2 subunit ( ndhB gene, NAD) messenger ribonucleic acid (mRNA) [ 28 ] was used to determine the presence and quality of the extracted RNA.
Each sample was screened using an endpoint RT-PCR assay [ 4 ] and a real-time RT-qPCR assay [ 29 ]. A GoTaq® 1-Step RT-PCR and RT-qPCR System (Promega) was used according to the manufacturer’s instructions, except that the total reaction volume was 25 μl and contained 2 μl of RNA template. The 303 bp endpoint RT-PCR amplicons were analyzed by electrophoresis in 2% agarose gels stained with SYBR® Safe DNA gel stain (Invitrogen) for visualization. The presence of amplicons corresponding to the size of the genome region of interest was observed on a GelDoc Go Gel Imaging System (Bio-Rad).
Metagenomic high-throughput sequencing (HTS) library preparation and sequence reads analysis
Thirty-two GPGV-positive grapevines were randomly selected for metagenomic sequencing (Table 1 ). Five μl of each of the 32 grapevine RNA extracts were used for HTS. The HTS libraries for each sample were prepared using TruSeq® Stranded Total RNA Library Prep Plant with Ribo zero plant kit (Illumina), following the manufacturer’s instructions, and adapters (Perkin Elmer) were used. The size range and concentration of the libraries were determined using the 2200 TapeStation® system (Agilent Technologies) and Qubit® Fluorometer 2.0 (Invitrogen), respectively, and the resulting quantification values were used to pool the libraries. The resulting library was finally sequenced using the NovaSeq 6000 system (Illumina) with a paired read length of 2 × 150 bp.
Bioinformatics analysis
All raw data were quality filtered, adapters were trimmed, and the resulting sequence read pairs were validated using Fastp (version 0.20.0) with default parameters. De novo assembly of the quality-checked paired sequence reads into contigs was carried out using the genome assembler SPAdes (version 3.13.0) [ 30 ]. The resulting de novo assembled contigs were searched [ 31 ] against the NCBI nucleotide database using BLASTn for the presence of GPGV and other viruses in the grapevine samples. For each sample, the de novo assembled contigs were mapped with Bowtie2 (version 2.3.4.2) to the most similar genome identified in the BLASTn search, and the mapped consensus sequence was viewed in Geneious (version 11.0) to determine the mapped read coverage and average depth of the genomes generated for each sample.
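As an illustration of the contig-filtering step described above (selecting de novo contigs whose BLASTn hit is GPGV), the following is a minimal stdlib Python sketch, not the authors' pipeline. It assumes BLAST's standard 12-column tabular output (-outfmt 6) and that the subject identifier is informative enough to contain "GPGV" (in practice this usually requires requesting the subject title, e.g. via a custom outfmt); the thresholds are illustrative.

```python
# Sketch: filter BLASTn tabular output (-outfmt 6) for GPGV-matching contigs.
# Column layout of outfmt 6:
# qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore

def gpgv_hits(blast_lines, min_pident=80.0, min_length=500):
    """Return query (contig) IDs whose subject ID mentions GPGV and whose
    alignment passes the identity/length thresholds (both are assumptions)."""
    hits = []
    for line in blast_lines:
        fields = line.rstrip("\n").split("\t")
        qseqid, sseqid = fields[0], fields[1]
        pident, length = float(fields[2]), int(fields[3])
        if "GPGV" in sseqid and pident >= min_pident and length >= min_length:
            hits.append(qseqid)
    return hits
```

Contigs passing the filter would then be carried forward to reference mapping, as in the workflow above.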
RT-PCR confirmation of viruses detected by high-throughput sequencing (HTS)
The arrangement of the coding region (equivalent to nt positions 22 to 6812 of the reference isolate NC_015782) of the assembled genomes of two GPGV isolates (5.21 and LT6) that were generated by HTS was confirmed by Sanger sequencing of overlapping amplicons, to assure their quality. The overlapping amplicons were generated by RT-PCR with primer pairs that were designed in this study using Oligo Explorer (version 1.1.2; www.genelink.com/tools/gl-oe.asp ) and three published primer pairs (Additional file 1 : Table S1) [ 4 , 6 , 32 ]. PCR amplification and gel electrophoresis were done as described previously. The amplicons were purified using QIAquick® PCR & Gel Cleanup Kit (Qiagen) and sent to Macrogen (Seoul, Korea) for Sanger sequencing. Each amplicon was sequenced twice in the forward and reverse directions. The resulting sequences for each isolate were used in combination with the contigs assembled from the HTS data to generate consensus genome sequences for each isolate. Four GPGV isolates (5.5, 5.13, 5.24 and LT7) had genomes with low average coverage and depth and in some cases, gaps in the consensus, therefore Sanger sequencing of specific regions were used to complete and/or confirm the genome assembly.
Phylogenetic tree and sequence identity analyses
To investigate the relationship between symptom expression and specific strains of GPGV, phylogenetic analysis of a 460 nt region of the GPGV genome encompassing the 3′ end of the movement protein and the 5′ end of the coat protein ORFs [ 2 , 6 ] was performed, comparing the 32 Australian isolates to representative isolates previously described as associated with GLMD symptoms or with asymptomatic infections [ 6 , 17 ]. Multiple alignments were performed using the MEGA X software [ 33 ] with default parameters. Phylogenetic trees were generated using the maximum likelihood (ML) method based on the Tamura-Nei model with 1000 bootstrap replicates.
The consensus genome sequences of the 32 Australian GPGV isolates generated in this study were aligned with 168 GPGV genome sequences available in GenBank (Additional file 1 : Table S2) using MUSCLE alignment software [ 34 ], excluding the viral untranslated regions (UTRs). The genetic distances within the isolate groups were calculated and maximum-likelihood phylogenetic trees were constructed using Kimura’s two parameter model in MEGA X [ 33 ] with default parameters and 1000 bootstrap replicates. The sequence identity analysis was carried out using BioEdit Sequence Alignment Editor [ 35 ] and the Sequence Demarcation Tool (version 1.2) [ 36 ] on the aligned genome sequences. The sequence similarity percentages of the isolates were determined at the nucleotide level for GPGV by MUSCLE alignment implemented in the SDT software (version 1.2).
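The pairwise identity computation performed by SDT-style tools can be illustrated with a short sketch. This is not the SDT implementation; the gap-handling convention here (columns gapped in both sequences are skipped, while a gap opposite a base counts as a mismatch) is an assumption.

```python
def pairwise_identity(a, b):
    """Percent nucleotide identity between two aligned, equal-length
    sequences; columns gapped in both sequences are skipped."""
    assert len(a) == len(b), "sequences must come from the same alignment"
    matches = compared = 0
    for x, y in zip(a.upper(), b.upper()):
        if x == "-" and y == "-":
            continue  # alignment column absent from both sequences
        compared += 1
        if x == y:
            matches += 1
    return 100.0 * matches / compared

def identity_matrix(seqs):
    """All-against-all identity matrix for a list of aligned sequences."""
    n = len(seqs)
    return [[pairwise_identity(seqs[i], seqs[j]) for j in range(n)]
            for i in range(n)]
```

Applied to an alignment of the 200 genomes, the matrix gives the kind of identity ranges reported in the Results.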
Based on current taxonomic demarcation criteria recommended by the International Committee on Taxonomy of Viruses (ICTV) for the Betaflexiviridae [ 37 , 38 ], phylogenetic trees for the RdRp and CP region of GPGV were constructed and the sequence similarity percentage was determined at both the nucleotide and amino acid (aa) levels using the methods mentioned above.
Median-joining (MJ) network, population genetics, equilibrium model, neutrality test and fixation index analyses
Variants networks were created using the Median Joining (MJ) algorithm and visualized using the PopART software ( http://popart.otago.ac.nz ), with default settings, for 51 aligned GPGV genome sequences including 32 Australian isolates and 19 overseas isolates that were used in the phylogenetic analysis and had highest nucleotide identity with the Australian isolates.
Further, the aligned genome sequences of the 32 GPGV isolates generated in this study, along with 168 overseas GPGV isolates available in GenBank, were used to assess genetic differentiation parameters: the number of variants (V), variant diversity (Vd), number of polymorphic (segregating) sites (S), total number of mutations η (Eta), average number of nucleotide differences (k), average pairwise nucleotide diversity (π), total number of synonymous sites (SS), total number of non-synonymous sites (NS), and the ratio of non-synonymous to synonymous nucleotide diversity (ω), using the DnaSP software (version 6.10.01) [ 39 ].
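Several of the parameters above follow directly from an alignment. As a minimal sketch (not DnaSP itself), S, k and π can be computed as:

```python
from itertools import combinations

def diversity_stats(seqs):
    """Segregating sites S, mean pairwise differences k, and per-site
    nucleotide diversity pi for equal-length aligned sequences.
    Gap/ambiguity handling is omitted for simplicity."""
    L = len(seqs[0])
    # S: alignment columns with more than one state across the sample
    S = sum(1 for i in range(L) if len({s[i] for s in seqs}) > 1)
    # k: average number of differences over all sequence pairs
    pairs = list(combinations(seqs, 2))
    k = sum(sum(x != y for x, y in zip(a, b)) for a, b in pairs) / len(pairs)
    # pi: k normalized by alignment length
    return S, k, k / L
```

DnaSP additionally corrects for gaps, missing data and site classes, so its values will differ slightly from this toy computation.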
The Tajima’s D [ 40 ] statistical test of neutrality, included in the DnaSP software, was also used on the dataset with default sliding window parameters, to test the neutral selection hypothesis on the GPGV genomes between populations. This is to determine whether the viral populations are evolving under a non-random process ( D T > 0: balancing selection, sudden population decline); mutation-drift equilibrium ( D T = 0) or a recent selective sweep ( D T < 0: population expansion after a recent bottleneck). The coefficient of F ST (fixation index) is a measure of the average pairwise distances between pairs of individual variants in terms of allele frequencies and was calculated by performing 1000 sequence permutations in DnaSP to estimate the genetic differentiation between populations. The fixation index (F ST ) can range from 0 to 1, where 0 indicates no differentiation between populations and 1 indicates populations are completely isolated and there is no sharing of genetic material or gene flow [ 39 , 41 , 42 ].
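Tajima's D compares the mean pairwise difference k with the expectation S/a1 under neutrality; negative values indicate an excess of rare variants, consistent with population expansion after a bottleneck or a selective sweep. A stdlib sketch of the standard formula (not DnaSP's implementation; missing-data handling is omitted):

```python
import math

def tajimas_d(seqs):
    """Tajima's D from equal-length aligned sequences."""
    n = len(seqs)
    L = len(seqs[0])
    # segregating sites
    S = sum(1 for i in range(L) if len({s[i] for s in seqs}) > 1)
    if S == 0:
        return 0.0  # D is undefined without polymorphism; 0 by convention here
    # mean pairwise differences k
    n_pairs = n * (n - 1) / 2
    k = sum(sum(x != y for x, y in zip(a, b))
            for i, a in enumerate(seqs) for b in seqs[i + 1:]) / n_pairs
    # standard coefficients (Tajima 1989)
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (k - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))
```

A sample dominated by one singleton variant yields a negative D, the pattern reported for the GPGV clusters in the Results.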
Recombination analysis
The aligned genome sequences of virus isolates from this study and the sequences of corresponding GPGV isolates available in GenBank were checked for potential recombination events. When screening for recombination, likely parental isolates of potential recombinants and recombination breakpoints within the genome sequences of GPGV from this study were determined using the RDP program (version 4.9) [ 43 ] with default parameters [ 44 ]. A recombination event was considered to be genuine only if it was detected by four or more of the seven measures [RDP (R), GENECONV (G), BOOTSCAN (B), MAXCHI (M), CHIMAERA (C), SISCAN (S), and 3SEQ (Q)] with p values < 0.05, implemented in the software RDP4.9 [ 45 – 47 ]. Recombination signals were disregarded if they were flagged by RDP4.9 as potentially arising through evolutionary processes other than recombination. | Results
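The core idea behind bootscan-style recombination screening, sliding a window along the alignment and asking which candidate parent the putative recombinant most resembles, can be sketched as follows. This is a toy illustration, not RDP4; window and step sizes are arbitrary, and RDP4 adds the statistical testing that separates true recombination from other evolutionary signals.

```python
def window_parent_signal(child, parent_a, parent_b, window=50, step=25):
    """For each alignment window, report which candidate parent the putative
    recombinant is closer to ('A', 'B', or '=' for a tie). A switch in the
    signal along the genome suggests a recombination breakpoint."""
    signal = []
    for start in range(0, len(child) - window + 1, step):
        w = slice(start, start + window)
        dist_a = sum(x != y for x, y in zip(child[w], parent_a[w]))
        dist_b = sum(x != y for x, y in zip(child[w], parent_b[w]))
        closer = "A" if dist_a < dist_b else "B" if dist_b < dist_a else "="
        signal.append((start, closer))
    return signal
```

For a sequence inheriting its first half from one parent and its second half from the other, the signal flips from "A" to "B" at the breakpoint.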
RT-PCR results
GPGV was detected in 473/2171 (21.8%) of the samples tested, including 23/133 samples from NSW, 26/706 samples from SA and 424/1176 samples from VIC. GPGV was not detected in 23 samples from Tasmania or 133 samples from Western Australia. The samples from VIC and NSW were analyzed by region (Fig. 1 ) and the highest proportion of positive samples (415/840; 49.4%) was observed in the Sunraysia horticultural district which encompasses northwestern VIC (350 samples) and southwestern NSW (70 samples).
Using retained RNA of the CHS samples and samples collected specifically for this study (n = 317), GPGV was detected in 113/317 samples, including 37/70 table grape, 59/126 wine grape, 2/16 rootstock and 15/105 unknown varieties, based on the presence of the expected 303 bp amplified PCR product [ 4 ] as well as Ct values ranging from 16.24 to 35.24 (data not shown) [ 29 ]. The presence of GPGV was confirmed in 81/191 reanalyzed CHS RNA extracts. Most of the GPGV-positive samples (101) were from the Sunraysia region in VIC and surrounding areas, with the remaining samples from the Sunraysia region in NSW (9) and vineyards in SA (3).
Symptoms were not recorded for 14 GPGV-positive grapevines, which were submitted through the CHS plant diagnostic laboratory for virus testing. Of the remaining 99 GPGV-positive grapevines, 6 had GLMD-like symptoms, 31 grapevines had a range of symptoms including restricted spring growth, millerandage and zigzag shoots and 62 were asymptomatic (Fig. 2 ).
Metagenomic high-throughput sequencing (HTS) and bioinformatics analysis
The total raw reads generated by metagenomic HTS from 32 grapevine samples ranged from 81,949 to 30,198,881 reads/sample, reduced to 81,714 to 29,489,728 reads/sample after quality trimming. De novo assembly of reads from each sample using SPAdes resulted in 3329–178,375 contigs, of which 10 to 2674 contigs matched known viral sequences. Of those virus-related contigs, 1–12 contigs per sample matched most closely to GPGV, which was confirmed by a BLASTn [ 31 ] search of the GenBank database. The size of the assembled GPGV genomes in the different samples ranged from 6574 to 7449 nt, with the average contig size per GPGV genome ranging from 1534 to 7426 nt (Additional file 1 : Table S3). The most complete genome sequence for the GPGV strain found in each grapevine sample was used for downstream analysis.
Sanger sequencing of overlapping amplicons generated by RT-PCR confirmed the genome sequence of the two Australian exemplar isolates 5.21 and LT6 and completed the assembly of the GPGV genome for isolates 5.5, 5.13, 5.24 and LT7, which had gaps and low coverage across the assembled genomes. The consensus genome sequences (excluding UTRs) generated in this study for 32 Australian GPGV isolates are available on GenBank (Accession number: OQ198990-OQ199021).
Phylogenetic and sequence identity analysis
The phylogenetic relationships between the 32 GPGV isolates from Australia and representative overseas isolates found on symptomatic and asymptomatic grapevines in previous studies were assessed using a 460 nt genomic region encompassing the 3′ end of the MP and 5′ end of the CP ORFs [ 2 , 6 ]. Five Australian GPGV isolates, comprising one isolate associated with GLMD-like symptoms (9.9), two isolates from grapevines with other symptoms including restricted spring growth, millerandage and zigzag shoots (9.1, 9.13), and two isolates from asymptomatic grapevines (LT6, LT7), fell in clade C (high occurrence of symptoms), which included isolates from other countries that were associated with GLMD. The remaining 27 Australian isolates, which included three isolates (9.10, 9.5, 9.14) from grapevines with GLMD-like symptoms, 12 isolates from grapevines with other symptoms and 11 isolates from asymptomatic grapevines, fell in clade A, which included isolates from other countries that were detected in asymptomatic grapevines (Fig. 3 ).
Isolates of all four clades were found in the Sunraysia grape-growing region of VIC (Mildura and surrounding towns, Robinvale) and NSW (Euston) (Figs. 4 a, 5 ). Only clade I and clade III isolates were found in SA, from the Barossa Valley (Angaston) and Adelaide Hills (AH), respectively. The SA clade I isolates, which shared 99.3% nt identity with each other, were from the same Barossa Valley grower, and the SA clade III isolates, which shared 98.8% nt identity with each other, were from the same Adelaide Hills grower.
The nucleotide percentage identity between the Australian GPGV isolates ranged from 96.6 to 99.8% for the RdRp gene and 93.1 to 100% for the CP gene (Additional file 1 : Fig. S1). The overall percentage amino acid identity between the Australian GPGV isolates ranged from 97.7 to 100% for the RdRp protein and 93.9 to 100% for the CP gene (Additional file 1 : Fig. S2).
The genomes (excluding UTRs) of 32 Australian and 168 overseas GPGV isolates shared 69–100% nucleotide identity with each other. Phylogenetic analysis of the genomes demonstrated that Australian GPGV isolates in Australian clades I and II (defined in Fig. 4 a) formed a distinct cluster that was most closely related to a cluster of isolates from Italy, which also contained the two GPGV isolates from Australian clade III (Fig. 4 b, c). Australian clades I and II shared 97.8–99.8% nucleotide identity, the Australian clade III and Italian isolates shared 97.7–100% nucleotide identity, and the larger cluster comprising the three Australian clades and the Italian isolates shared 97.2–100% nucleotide identity. Australian clade III isolates, which include isolates 8.33 and 9.14 and which were detected in an asymptomatic Lambrusco vine and a Malbec vine with GLMD-like symptoms, respectively, were most closely related (98.5–99.6% nucleotide identity) to isolates fvg-Is6 (MH087440) and fvg-Is1 (MH087439), both of which were isolated from asymptomatic grapevines.
Australian clade IV GPGV isolates fell into a second distinct cluster of isolates that were primarily from Europe and also included one isolate from China (Fig. 4 b, c). These isolates shared 96.7–99.7% nucleotide identity with each other. Australian grapevine isolates 9.1 (asymptomatic Ralli seedless table grape) and 9.9 (Malbec with GLMD-like symptoms), both from the Sunraysia region (Mildura), were most closely related to isolate fvg-Is7 from Italy (MH087441), which was isolated from a symptomatic grapevine. Isolates LT6 and LT7, from asymptomatic Grüner Veltliner, were most closely related (97.8–99.7% nt identity) to two Cabernet sauvignon isolates from Italy (BK011097, BK011099), one Cabernet sauvignon isolate from China (BK011073) of unknown disease status, and one isolate from a symptomatic grapevine cv. Pinot noir in France (KY706085).
Median-joining (MJ) network, population genetics, equilibrium model, neutrality test and fixation index analysis
MJ networks of 32 GPGV isolates from Australia and 19 overseas isolates, which clustered with Australian isolates in the Australian clades I-IV of the phylogenetic tree (Fig. 4 c), showed four distinct variant clusters (Fig. 6 ). The formation of these clusters is supported by the phylogenetic analysis in which four clades were formed that contained the same Australian and overseas isolates (Fig. 4 c) as the related median-joining network clusters (Fig. 6 ). Each MJ network cluster contained hypothetical intermediate variants (represented by black dots) that were often directly linked to only one or a few of the known variants that were analyzed.
Genetic diversity parameters and selective pressure were analyzed for the 32 Australian and 168 overseas GPGV isolates (Table 2 ). The sequences were grouped and analyzed based on the clusters formed in the phylogenetic analysis (Fig. 4 c). All four populations showed high variant diversity (Vd), at or near 1, indicating high levels of diversity within each cluster. Low nucleotide diversity (π), ranging between 0.008 and 0.022 within the four MJ network clusters, is indicative of GPGV population expansions (Table 2 ). The GPGV populations in the four clusters had a ratio of non-synonymous to synonymous nucleotide diversity (ω) below 1, indicating that they have been under purifying selection (Table 2 ). The neutrality test (Tajima's D ) produced negative values for each cluster, consistent with population expansion after a recent bottleneck (Table 2 ). The fixation index (F ST ) test statistic, used to estimate the degree of genetic divergence between the MJ network clusters, ranged between 0.44 and 0.52 for each pair of clusters compared, suggesting infrequent gene flow and high genetic differentiation (Table 3 ).
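The ω ratio reported above contrasts nonsynonymous with synonymous change. As a simplified sketch, the observed synonymous and nonsynonymous codon differences between two coding sequences can be counted with the standard genetic code; this is not the full site-normalized Nei-Gojobori procedure DnaSP uses for ω (codons differing at more than one position, or containing gaps, are simply skipped here).

```python
# Standard genetic code in TCAG codon order (TTT, TTC, TTA, TTG, TCT, ...).
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: aa for (a, b, c), aa
               in zip(((x, y, z) for x in BASES for y in BASES for z in BASES), AA)}

def syn_nonsyn_counts(cds_a, cds_b):
    """Observed synonymous and nonsynonymous codon differences between two
    aligned, in-frame coding sequences (simplified: multi-hit and gapped
    codons are ignored)."""
    syn = nonsyn = 0
    for i in range(0, len(cds_a) - 2, 3):
        codon_a, codon_b = cds_a[i:i + 3], cds_b[i:i + 3]
        n_diff = sum(x != y for x, y in zip(codon_a, codon_b))
        if n_diff != 1 or "-" in codon_a + codon_b:
            continue
        if CODON_TABLE[codon_a] == CODON_TABLE[codon_b]:
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn
```

An excess of synonymous over nonsynonymous differences across many pairwise comparisons is the raw pattern underlying the ω < 1 (purifying selection) result in Table 2.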
Recombination analysis
The analysis of 168 global isolates did not identify any international isolates as major or minor parents of the 32 Australian GPGV isolates. However, four statistically significant ( p < 0.05) recombination events were predicted by the analysis (Table 4 ): recombinant 1 (9.13, clade I) had major parent 9.9 (clade IV; 97.0% similarity) and minor parent 5.13 (clade I; 98.6% similarity); recombinant 2 (9.14, clade III) had major parent 8.33 (99.1% similarity) and minor parent 9.1 (clade IV; 96.9% similarity); recombinant 3 (9.1, clade IV) had major parent 9.9 (clade IV) and minor parent 8.47 (clade I); recombinant 4 (5.13, clade I) had major parent 2.1 (clade I; 99.0% similarity) (refer to Fig. 4 a). Recombination affected the RdRp gene in three events (isolates 9.1, 9.13, 9.14) and the MP gene in one event (isolate 5.13) (Fig. 7 , Table 4 ). All recombinants and parents were located in the Sunraysia region of VIC (Mildura), within approximately a 25 km radius.
This study provides a snapshot of the prevalence of GPGV in wine, table grape and rootstock varieties in Australia between the years 2017 and 2021. GPGV was detected in grapevines with GLMD-like symptoms as well as in grapevines with other symptoms such as delayed budburst, increased berry acidity, stunted shoots, poor yield, restricted spring growth, zig-zag shoots and millerandage. The virus was also detected in asymptomatic grapevines. GPGV was only found in southeastern Australia, in NSW, SA and VIC and had the highest prevalence in the Sunraysia horticultural region, where table grapes and wine grapes are grown and where some germplasm collections and grapevine nurseries are located.
There is no doubt that infected planting material is responsible for the introduction of GPGV into Australia, and it has been reported that once testing was introduced at the Australian border, 10% of imported grapevines had GPGV [ 23 ]. The close relationship of many Australian isolates to Italian isolates, demonstrated by the phylogenetic analysis, supports the hypothesis that the introduction of GPGV into Australia has a European origin. This is further supported by the MJ variant network analysis, in which European isolates and one Chinese isolate occur with Australian isolates in cluster IV and Italian and Australian isolates occur together in cluster III. It is likely that GPGV was introduced into Australia from Europe after it was introduced into Italy, which is estimated to have occurred in 2003, and before testing commenced at the Australian border in 2014 [ 23 , 26 ].
The Australian and overseas GPGV isolates in clades I and II are closely related, suggesting that GPGV isolates from both clades may have emerged from one introduction into Australia (Fig. 4 b). However, the MJ network indicates that they are distinct clusters emerging from different hypothetical intermediate variants, and therefore it is more likely that phylogenetic clades I and II are associated with separate introductions (Fig. 6 ). Thus, the presence of four distinct Australian phylogenetic clades and four MJ network clusters suggests at least four different introductions of GPGV into the country. The presence of GPGV isolates from Sunraysia in all four clades and MJ network clusters suggests multiple introductions of the virus into this region. Isolates LT6 and LT7 from SA, which are closely related to Sunraysia GPGV isolates found in Malbec (9.9) and Ralli Seedless (9.1), were derived from Grüner Veltliner planting material that was imported independently from Europe and was not distributed to or in contact with material from Sunraysia at the time this study was conducted. This indicates a minimum of five introductions of GPGV into Australia. However, it is possible that these four closely related isolates have a common European origin.
RNA viruses are known to have high mutation rates, which lead to the production of deleterious mutations that can destabilize the virus population [ 48 , 49 ]. Thus, when RNA viruses with a large population size reach a bottleneck, purifying selection helps eliminate these mutants and improves the survival of the population [ 48 , 49 ]. The neutrality test parameter D T tests the distribution of nucleotide polymorphisms in the genome [ 40 ]. The negative D T values suggest a recent introduction of this virus into Australia and infer a recent expansion of Australian GPGV populations through new mutations. Virus populations were shown to survive bottleneck selections, indicated by negative neutrality test values, which could be caused by transmission during grafting or transmission by the vector bud-blister mites. The presence of hypothetical intermediate variants in the MJ network clusters is an indicator of diversity within the clusters. Further sampling and sequencing of GPGV isolates is required to determine how the known variants are linked to a specific variant introduced into Australia or whether multiple introductions of closely related variants have occurred.
The F ST values of 0.44–0.52 between the four clusters suggest that the clusters are linked. This is likely due to the high relatedness of the Italian isolates that are linked to the five introductions of GPGV into Australia. There is some association between Australian isolates in cluster IV and an isolate from China (BK011073), and therefore an introduction from this region cannot be ruled out. However, Australia has imported most grapevine material from Europe or the Americas, and an introduction from Europe seems more likely. It is also possible that the Chinese isolate, which was detected in the variety Cabernet sauvignon, is a result of the importation of infected grapevine material from Europe into China.
Recombination events have been previously reported in GPGV [ 4 , 5 , 19 ], and this study also predicted recombination amongst the Australian isolates within the RdRp and MP regions. No international isolates were identified as parents, which suggests that the evolution of these GPGV isolates occurred after their introduction into Australia (Table 4 ). Based on the phylogenetic, variant and recombination analyses, it is hypothesized that isolate 9.9 (major parent 1) could be an early introduction of GPGV into Australia which led to the generation of recombinant 9.1 (recombinant 1) by combining with a second introduction of GPGV, 8.47 (minor parent 1). A second recombination event led to the formation of 9.14 (recombinant 2), which was formed by a recombination event between isolate 8.33 (major parent 2) and the recombinant isolate 9.1 (minor parent 2, Fig. 8 ). The prediction of Australian recombinants with Australian parent GPGV strains provides further evidence for the emergence of new variants since the initial introduction into Australia and could indicate that GPGV has been present in the country for a long time, although the year of introduction could not be established.
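The major/minor-parent assignments discussed above rest on sliding-window comparisons: a putative recombinant resembles one candidate parent over part of the alignment and the other parent elsewhere, and the switch point approximates the breakpoint. The sketch below is a toy version of that idea, not the detection algorithm actually used; the sequences, window size and labels are illustrative.

```python
def window_identity(a, b, start, size):
    """Fraction of identical positions between two aligned sequences
    in the window [start, start + size)."""
    return sum(x == y for x, y in zip(a[start:start + size],
                                      b[start:start + size])) / size

def closer_parent_per_window(recombinant, parent1, parent2, size=4):
    """Label each window 'P1' or 'P2' according to which candidate parent
    the recombinant more closely resembles; a P1 -> P2 switch suggests
    an approximate recombination breakpoint."""
    calls = []
    for start in range(0, len(recombinant) - size + 1, size):
        id1 = window_identity(recombinant, parent1, start, size)
        id2 = window_identity(recombinant, parent2, start, size)
        calls.append("P1" if id1 >= id2 else "P2")
    return calls
```

For a toy recombinant "AAAATTTT" with candidate parents "AAAACCCC" and "GGGGTTTT", the calls are ["P1", "P2"], placing the breakpoint near the middle of the alignment.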
The results of this study demonstrate that GPGV spread and divergence has occurred in Australia. Infected planting material is likely to have contributed to some spread within and between Australian table and wine grape growing regions. However, GPGV is also transmitted by grape leaf bud-blister mites, which has led to greater GPGV transmission efficiency [ 50 ]. The mite is dispersed through wind, transported through the movement of infested leaf materials, on clothing and equipment, and may be dispersed in cuttings [ 51 ]. The evidence for mite transmission lies in the diversity of Vitis species and varieties that are infected by GPGV and which are represented in each phylogenetic clade and MJ network cluster. The close relatedness of the GPGV variants found in rootstocks, table grapes and wine grapes, for example isolates CK1 (rootstock), 8.28 (table grape) and 8.29 (wine grape) in phylogenetic clade II, which have > 99% nt identity, supports the hypothesis of localized transmission. A high abundance of blister mites has been observed in the Sunraysia horticultural district of VIC and NSW (data not shown) and would account for some of the spread and high prevalence of GPGV in this region. The lower prevalence of GPGV in other grape-growing regions in Australia might be associated with a lower abundance of the mite vector and possibly fewer introductions of the virus in planting material. There is also evidence for the spread of GPGV between regions. For example, GPGV isolates in two wine grape vines (unknown variety) in Angaston, SA, and an isolate in a table grape var. Ralli Seedless from Sunraysia, VIC, share > 99% nucleotide identity. The two regions are > 300 km apart, and this spread could be due solely to the movement of a viruliferous vector, but transmission between regions could also be due to the movement of infected planting material.
GLMD symptoms relating to GPGV infection are known to become visible at the beginning of the grapevine growing season, in spring, but fade later in the season towards veraison, when berries ripen [ 2 ]. It has been reported that there are mild and severe strains of GPGV that affect the severity of symptom expression [ 15 ]. Expression of GLMD symptoms in GPGV-affected vines collected as part of this study was variable, with both symptomatic and asymptomatic vines of the same grape cultivar testing positive for the virus. No trend was observed between cultivar, time of sampling (seasonality) or growing region and the expression of GLMD symptoms. These results were supported by the phylogenetic analyses conducted in this study. No association could be made between the presence of specific Australian GPGV strains and the presence of any symptom type in Australian grapevines when the same region was compared phylogenetically. These findings are in contrast to previous studies suggesting that two distinct genetic GPGV lineages represent asymptomatic and symptomatic grapevines [ 2 , 6 ]. We observed strains of GPGV normally associated with asymptomatic infections overseas in grapevines with GLMD-like symptoms in Australia, and GPGV strains that would normally be associated with symptomatic vines in Italy in asymptomatic Australian grapevines. A similar lack of association between GPGV variants and disease has also been reported by others [ 52 – 54 ].
Associating symptoms with specific Australian GPGV isolates is further complicated by the presence of other viruses in some of the grapevines that were examined, which could also contribute to the presence of the disease (Additional file 1 : Table S3). Similar results have been noted in other studies, which reported the inability to link specific symptoms to the presence of GPGV given mixed viral infections in grapevines [ 4 , 55 , 56 ].
GLMD-like symptoms, restricted spring growth, zig-zag shoots and millerandage, which were observed in some GPGV-infected grapevines in this study, can be caused by a range of abiotic and biotic factors such as other pathogens, environmental conditions, temperature, nutrient deficiency, soil type and the presence of mites. It is possible that these factors have contributed to the expression of these symptoms in some Australian grapevines that are infected with GPGV only or in combination with other viruses [ 2 , 22 , 57 – 59 ]. Variable GLMD expression has also been linked to boron deficiency [ 24 ]. Although the availability of boron was not measured in the Australian vineyards where GPGV-infected samples were collected for this study, soils in the Sunraysia region may be alkaline with low organic matter, which can affect boron content and availability and may therefore explain the variable association between GPGV presence and GLMD-like symptoms that was observed [ 60 – 64 ]. | Conclusions
In this study, a combination of the documented history of importation of some infected grapevine varieties into Australia, together with data obtained from surveillance and phylogenetic, MJ network, sequence identity and recombination analyses, contributes to an improved understanding of GPGV introduction and spread in Australia. The analyses indicated a minimum of five introductions of GPGV followed by the emergence of new variants. The high level of GPGV distribution observed in south-eastern Australian vineyards, particularly in Sunraysia, is likely linked to transmission by bud-blister mites and the distribution of infected planting material, as has been observed in Italy [ 2 , 5 , 22 ]. Therefore, in Australia, to minimize risk to production when establishing new vineyards, the use of planting material in which GPGV has not been detected is recommended and effective management of the bud mite vectors is required. The absence of a clear correlation between distinct GPGV strains and the manifestation of symptoms in grapevines makes it necessary to conduct further research aimed at studying the biology of GPGV and its interaction with rootstocks, wine and table grape varieties grown in Australia. | 
Grapevine Pinot gris virus (GPGV; genus Trichovirus in the family Betaflexiviridae ) was detected in Australia in 2016, but its impact on the production of nursery material and fruit in Australia is still unknown. This study investigated the prevalence and genetic diversity of GPGV in Australia. GPGV was detected by reverse transcription-polymerase chain reaction (RT-PCR) in a range of rootstock, table and wine grape varieties from New South Wales, South Australia, and Victoria, with 473/2171 (21.8%) samples found to be infected. Genomes of 32 Australian GPGV isolates were sequenced and many of the isolates shared high nucleotide homology.
Phylogenetic and haplotype analyses demonstrated that there were four distinct clades amongst the 32 Australian GPGV isolates and that there were likely to have been at least five separate introductions of the virus into Australia. Recombination and haplotype analyses indicate the emergence of new GPGV strains after introduction into Australia. When compared with 168 overseas GPGV isolates, the analyses suggest that the most likely origin of Australian GPGV isolates is Europe. There was no correlation between specific GPGV genotypes and symptoms such as leaf mottling, leaf deformation, and shoot stunting, which were observed in some vineyards, and the virus was frequently found in symptomless grapevines.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12985-023-02171-3.
Keywords | Supplementary Information
| Acknowledgements
We thank the growers and Crop Health Services (CHS) for supplying samples, and colleagues, including Dr. Ian Dry and Libby Tassie (Tassie Viticulture Consulting), who assisted in sending samples for the present study. Thanks to Chris Bottcher (Agriculture Victoria Research) and Ruvinda N Aluthgama Kankanamalage (Agriculture Victoria Research) for technical assistance, especially at the early stages of the project analysis.
Author contributions
All authors have read and agreed to the published version of the manuscript. Conceptualization, KPK and FC; methodology, KPK, DL and FC; formal analysis, KPK; investigation, KPK; resources, KPK, DL and FC; data curation, KPK, DL and FC; writing—original draft preparation, KPK; writing—review and editing, KPK, AR, BR and FC; visualization, KPK; supervision, FC, AR and BR; project administration, FC; funding acquisition, FC and BR.
Funding
This research was supported by La Trobe University through the provision of the La Trobe Full Fee Research Scholarship (LTUFFRS) and the La Trobe University Graduate Research Scholarship (LTUGRS), and by a PhD research scholarship from Wine Australia. Wine Australia supports a competitive wine sector by investing in research, development, and extension (RD&A), growing domestic and international markets, and protecting the reputation of Australian Wine. We would like to acknowledge Agriculture Victoria Research (AVR) for use of facilities that enabled the study to be undertaken.
Availability of data and materials
The genomic data generated and/or analyzed during the current study are available in the open access GenBank database in the National Center for Biotechnology Information repository (Accession numbers: OQ198990–OQ199021).
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests. | CC BY | no | 2024-01-16 23:35:05 | Virol J. 2023 Sep 13; 20:211 | oa_package/2e/c5/PMC10500770.tar.gz |
PMC10539721 | 37781318 | Introduction
Other Effective area-based Conservation Measures (OECMs) were introduced in 2010 by the Convention on Biological Diversity (CBD) as areas that achieve long-term and effective in-situ biodiversity conservation outside of protected areas ( CBD, 2010 ). Consequently, OECMs represent a novel conservation approach where conservation outcomes are incidental to existing spatial management practices. In other words, OECMs are identified and recognized rather than specifically designated. The definition, guiding principles, common characteristics and criteria for identifying OECMs were agreed upon by CBD parties in 2018 ( CBD, 2018 ). According to CBD Decision 14/8 ( CBD, 2018 ), key criteria for an area to be identified as an OECM include geographic definition, governance and management, achieving positive and sustained long-term outcomes for biodiversity conservation, including associated ecosystem functions, services and locally relevant values such as cultural, spiritual, and socioeconomic aspects where applicable. Subsequently, additional guidance has been developed by the International Union for the Conservation of Nature (IUCN), the Food and Agriculture Organisation (FAO) and other global organizations to facilitate the identification, recognition and reporting of OECMs 1 ( FAO, 2019 ; FAO, 2022 ; Garcia et al. , 2021 ; ICES, 2021 ; IUCN-WCPA, 2019 ) to contribute to the attainment of Target 14.5 of the 2030 Agenda for Sustainable Development of the United Nations ( UN, 2015 ) and Action Target 3 of the Kunming-Montreal Global Biodiversity Framework ( CBD, 2010 ; CBD, 2022 ). The latter emphasizes the need to conserve at least 30% of terrestrial and marine areas globally by 2030 through ecologically representative, effectively and equitably managed, and well-connected networks of protected areas and OECMs ( CBD, 2022 ).
In recent years there has been increasing research and policy interest in OECMs and numerous countries worldwide have made significant efforts to identify and recognize OECMs to support the implementation of spatially-explicit conservation targets. According to the most recent update of the World Database on Protected Areas (May 2023; WDPA, 2023 ), 671 OECMs have been recognized by only nine countries worldwide (none in Europe).
This protocol aims to establish the methodological approach for a Scoping Review (ScR) with the following objectives:
identify and map the available evidence on assessing the potential of OECMs to contribute to spatial conservation targets,
examine the methodologies employed in research on assessing potential OECMs,
identify the actual spatial contribution of potential OECMs to conservation targets,
provide insights into the evidence-based knowledge about OECMs and information on how potential OECMs contribute to the spatial targets set by CBD. | Methodology
The proposed ScR will follow the methodology outlined by Arksey and O'Malley (2005) , as further developed by Levac et al. (2010) and the Joanna Briggs Institute (JBI) methodology ( Peters et al., 2020 ). The ScR will encompass the following nine stages, as recommended by the JBI methodology: 1. Defining and aligning the objectives and questions; 2. Developing and aligning the inclusion criteria with the objectives and questions; 3. Describing the planned approach for evidence searching, selection, data extraction and presentation of the evidence; 4. Conducting the evidence search; 5. Selecting the relevant evidence; 6. Extracting the evidence; 7. Analyzing the evidence; 8. Presenting the results; 9. Summarizing the evidence, drawing conclusions and identifying any implications of the findings ( Peters et al., 2020 ).
The ScR protocol and final review paper will adhere to the Preferred Reporting for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR) developed by Tricco et al. (2018) . The SUMARI Protocol Template for Scoping Reviews in Word format ( https://sumari.jbi.global/ ) was used to guide the development of this ScR protocol.
Search strategy
The bibliographic search will be conducted in three databases/platforms, namely: (a) Scopus, (b) Web of Science – Core Collection, and (c) Google Scholar. A combination of keywords will be used in the search, adapted to meet the specific search specifications of each database. The search will be conducted within the title, abstract and keywords of the documents ( Table 2 ). For the Scopus and Web of Science databases, all documents retrieved from the search will be considered for eligibility. In the case of the web-based search using the Google Scholar database, only the first 100 hits will be considered ( Haddaway et al. , 2015 ). Eligible documents will also be sought in other sources such as organizational libraries and websites, preprint archives, document repositories, reference lists of the included documents from the database search and documents suggested by topic experts and stakeholders.
Study/source of evidence selection
Following the search, all identified citations will be uploaded to Covidence , a web-based collaboration software platform designed to streamline the production of systematic and other literature reviews. Any duplicate citations will be removed during this stage. The document selection process will be conducted using a team approach, as recommended by Levac et al. (2010) . Twelve independent reviewers will be involved in the selection process. Two reviewers will initially screen each title and abstract of the identified papers against the predefined inclusion criteria ( Table 1 ). Papers that meet the inclusion criteria will proceed to the next stage. The full text of the initially selected documents will be carefully assessed by the reviewers against the inclusion criteria. Any sources that do not meet the inclusion criteria will be excluded from the review. Detailed records will be kept of the reasons for excluding specific sources, and this information will be reported in the final ScR paper. In case of any disagreements between the reviewers during any stage of the selection process, a third independent reviewer will be consulted to resolve the conflicts. The results of the search and the document selection process will be reported comprehensively in the final ScR paper. A flow diagram following the Preferred Reporting Items for Systematic Reviews and Meta-analyses extension for scoping review (PRISMA-ScR) guidelines ( Tricco et al. , 2018 ) will be presented to illustrate the search and selection process.
Data extraction
Data extraction from the documents included in the ScR will be carried out by two independent reviewers using a data extraction tool, i.e., a charting table aligned to the objective and the questions of the ScR (see Extended data ( Petza et al., 2023 )). The data extracted will include specific details related to the participants, concept, context, study methods and key findings relevant to the review objective. To ensure consistency and facilitate collaboration and interaction among reviewers, the data extraction tool will be integrated into the Covidence systematic review management software. This software will help maintain consistency in the extraction process, allow for seamless cooperation between the reviewers, and ensure that the extracted data is consistent and aligned with the objectives and questions of the ScR.
Data analysis and presentation
The evidence synthesized through the ScR will be presented in alignment with the review objective and specific questions in the final review paper. The full set of raw data collected by this ScR will be available open-access as supplementary material to the final review paper. The data collected will be analyzed by applying descriptive statistics methods. The summarized data will be presented using a combination of graphical and tabular formats, utilizing appropriate software packages and tools (e.g., Microsoft Excel, Flourish Studio, Datawrapper, Plotly, etc.). Graphical representations, such as bar charts, line graphs, donut charts, Sankey, chord and network diagrams, choropleth maps, word clouds, etc., will be used to visually display relevant information and trends identified in the included studies. These visuals can help convey patterns, relationships, and key findings effectively. For example, the number of documents included in the ScR by year of publication will be presented using bar charts. Choropleth maps will be used to present the geographical distribution of the various case studies reviewed. The different types of OECMs will be depicted using word clouds. Sankey diagrams will be constructed to visualize the flow of information between multiple entities (e.g., conservation objective, realm and sector), while network and chord diagrams will be used to depict the connections between the different methodologies applied for the assessment of potential OECMs. In addition to the graphical and tabular presentations, a narrative summary will be included. This summary will provide a coherent and comprehensive description of the findings, explaining how the results align with the review's objective and specific questions. It will offer a synthesis of the key themes, trends, and patterns identified in the included studies.
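As a concrete illustration of the descriptive-statistics step, the count of included documents per publication year (the bar-chart example above) can be prototyped in a few lines. The record structure and the text rendering below are our own assumptions for sanity-checking extracted data; the final figures would use the plotting packages listed above.

```python
from collections import Counter

def documents_per_year(records):
    """Tally included documents by publication year."""
    return Counter(r["year"] for r in records)

def text_bar_chart(counts, width=20):
    """Quick text rendering of the tally, useful for checking
    extracted data before building the final figures."""
    top = max(counts.values())
    rows = []
    for year in sorted(counts):
        bar = "#" * round(counts[year] / top * width)
        rows.append(f"{year}: {bar} ({counts[year]})")
    return "\n".join(rows)
```

Running `text_bar_chart(documents_per_year(records))` on the extracted charting data gives a one-line-per-year summary that can be cross-checked against the Covidence export before plotting.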
This scoping review (ScR) protocol aims to establish the methodological approach for identifying and mapping the evidence regarding the actual contribution of Other Effective area-based Conservation Measures (OECMs) to spatial conservation targets. Emphasis will be placed on examining the research conducted, including the methodologies applied. OECMs, introduced by the Convention on Biological Diversity (CBD) in 2010, refer to areas outside of protected areas, such as fisheries restricted areas, archaeological sites, and military areas, that effectively conserve biodiversity in-situ over the long term. OECMs are recognized rather than designated. Many countries currently endeavor to identify, recognize and report OECMs to the CBD for formal acceptance to support the implementation of spatial conservation targets. Studies that assess the contribution of OECMs to spatial conservation targets will be considered. Potential OECMs with primary, secondary or ancillary conservation objectives established by all sectors in the terrestrial, freshwater and marine realm worldwide will be considered. Peer-reviewed and grey literature will be considered without imposing limitations based on publication year, stage, subject area and source type. Both experimental and observational studies in English, French, German, Greek, Italian, and Spanish will be reviewed. The ScR will follow the Joanna Briggs Institute (JBI) methodology. The protocol will be guided by the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) extension for scoping reviews. The search will encompass bibliographic databases such as Scopus, Web of Science and Google Scholar. Grey literature sources will include databases, pre-print archives and organizational websites. The Covidence platform will be utilized for data management and extraction.
Amendments from Version 2
The revised version of the manuscript has one minor change following Reviewer 3's comments. Specifically, the only difference between version 2 (revision 1) and version 3 (revision 2) of the manuscript is the following: Review question: the following 5 th review sub-question has been added. 5. “ What are the main outcomes of the studies that have assessed potential OECMs regarding key findings, effectiveness of potential OECMs, gaps of knowledge and policy recommendations?”
The overall research question that will guide the ScR is: What is the current knowledge regarding the contribution of OECMs to biodiversity conservation targets? The ScR will aim to address the following sub-questions:
What is the geographical distribution of studies that have assessed potential OECMs and their contribution to biodiversity conservation?
What are the characteristics of the potential OECMs studied in terms of governance type, sector, realm, conservation objectives, and rationale?
What methodologies have been employed to assess the potential of OECMs in contributing to biodiversity conservation?
What is the spatial contribution (percentage of area covered) of the potential OECMs?
What are the main outcomes of the studies that have assessed potential OECMs regarding key findings, effectiveness of potential OECMs, gaps of knowledge and policy recommendations?
Inclusion/exclusion criteria
The inclusion criteria of the ScR, which serve as the basis for determining the sources to be considered for inclusion in the review, will be developed in accordance with the "Participants, Concept and Context (PCC)" mnemonic ( Table 1 ).
Participants
The ScR will consider potential OECMs, established by any sector, such as transport, offshore energy, fisheries, aquaculture, maritime, tourism, defense and archaeological heritage. These potential OECMs may have primary, secondary or ancillary conservation objectives and can be governed by different entities, including governments (at various levels), private individuals, organizations or companies, indigenous peoples and/or local communities, as well as shared governance involving multiple rights holders and stakeholders.
Concept
The ScR will focus on the assessment of potential OECMs and how their contribution to spatial conservation targets has been addressed in the existing scientific literature. All studies that assess potential OECMs, along with the various methodologies and metrics applied to evaluate their effectiveness in delivering biodiversity conservation outcomes and contributing to spatial conservation targets will be reviewed.
Context
The ScR will consider studies conducted in the terrestrial, freshwater, and marine realms worldwide.
Types of sources
This ScR will encompass both scientific (e.g., articles, book chapters, letters, editorials, books, data papers) and grey literature (e.g., non-published academic research, theses, policy papers, organizational papers and reports, conference abstracts and papers). Scientific literature will be sourced from online databases, and grey literature from pre-print archives, organizational websites, web-based search engines, and suggestions from topic experts. There will be no restrictions on publication year, publication stage (final or in press), subject area, or source type. All document types will be considered, except for evidence syntheses such as systematic, scoping, rapid, and narrative reviews. To align with the language competence of the authors, only studies written in English, French, German, Greek, Italian, and Spanish will be included in the ScR.
Underlying data
No data is associated with this article.
Extended data
Open Science Framework (OSF): Assessing the potential of other effective area-based conservation measures for contributing to conservation targets: a global scoping review protocol – PRISMA-ScR Checklist and Data Extraction Tool. https://doi.org/10.17605/OSF.IO/3WK5H ( Petza et al. , 2023 ).
This project contains the following extended data:
Data Extraction Tool.pdf (Data extraction tool of the Scoping Review (ScR))
Reporting guidelines
Open Science Framework (OSF): PRISMA-ScR checklist for ‘Assessing the potential of Other Effective area-based Conservation Measures for contributing to conservation targets: A global scoping review protocol’. https://doi.org/10.17605/OSF.IO/3WK5H ( Petza et al. , 2023 ).
Data are available under the terms of the Creative Commons Zero “No rights reserved” data waiver (CC0 1.0 Public domain dedication). | CC BY | no | 2024-01-16 23:35:05 | Open Res Eur. 2024 Jan 10; 3:118 | oa_package/86/63/PMC10539721.tar.gz |
|||
PMC10587034 | 37733148 | Introduction
Diabetes mellitus is a chronic illness often accompanied by peripheral vascular disease affecting many individuals worldwide. By 2040, it is projected that 642 million people on the planet will have diabetes, about 10% of the world’s population [ 1 •]. The risk of patients with diabetes developing diabetic foot ulcers (DFU) is 15 to 25%, with reoccurrence rates at 5 years increasing dramatically, up to 50 to 70% [ 2 ]. In the USA, billions of dollars are spent in direct costs, for the treatment of diabetes and its complications, and in indirect costs, such as a patient’s inability to work, leading to disability and premature mortality [ 3 ]. About 20% of severe diabetic foot infections result in some type of amputation. Patients with diabetes who have had a foot ulcer are two times more likely to die at ten years compared to patients with diabetes who have never had an ulcer [ 4 ]. There is an increased prevalence of diabetes among individuals from minority groups, and research continues to demonstrate differences in rates of DFU and amputation compared to White individuals [ 5 ••]. | Conclusions
While there continues to be improvement in the management of diabetic foot, there is a dire need to address the racial and ethnic disparities that exist. Future efforts must be made to improve the diversity in patient participation when investigating novel approaches to treat diabetic foot ulcers and other conditions. | Purpose of Review
Diabetes mellitus is a chronic medical condition affecting many individuals worldwide and leads to billions of dollars spent within the healthcare system for its treatment and complications. Complications from diabetes include diabetic foot conditions that can have a devastating impact on quality of life. Diabetic foot ulcers and amputations occur in minority individuals at an increased rate compared to White individuals. This review provides an update examining the racial and ethnic disparities in the management of diabetic foot conditions and the differences in rates of amputation.
Recent Findings
Current research continues to show a disparity as it relates to diabetic foot management. There are novel treatment options for diabetic foot ulcers that are currently being explored. However, there continues to be a lack of racial diversity in new treatment studies conducted in the USA.
Summary
Individuals from racial and ethnic minority groups have diabetes at higher rates compared to White individuals, and are also more likely to develop diabetic foot ulcers and receive amputations. Over the last few years, more efforts have been made to improve health disparities. However, there needs to be an improvement in increasing racial diversity when investigating new therapies for diabetic foot ulcers.
Keywords | Race, Ethnicity, and Disparity
In discussing racial and ethnic disparities in managing diabetic feet, the concepts of race and health disparities must be explored. Race is a social construct and does not have a scientific foundation [ 6 ]. It groups individuals based on physical characteristics like skin color through self-identification and by third-party observation [ 7 ]. Moreover, ethnicity encompasses the cultural norms, values, and behaviors that connect groups based on shared culture and language [ 8 •].
Based on the definition provided by the Centers for Disease Control and Prevention, health disparities “are differences in health outcomes between groups that reflect social inequalities” [ 9 ]. While there is no genetic basis for defining race, race and ethnicity are essential to consider because they significantly influence racial stratification, which plays a role in access to care, treatment, bias, and racial discrimination and racism [ 8 •]. These factors contribute to the health disparities that exist and, thus, the differences in the management of diabetic feet among different racial and ethnic groups.
Racial Disparities in Diabetic Feet Prevalence, Incidence, Risk Factors, and Mortality
Racial and ethnic minorities have a higher rate of diabetes than White people [ 5 ••]. White adults have the lowest prevalence of diabetes; the prevalence is estimated at 7.3% for females and 9.4% for males. For African American adults, the estimated prevalence is 14.7% for females and 13.4% for males. Hispanic adults have a prevalence of 15.1% and 14.1%, and for Asian adults, the rates are 12.8% and 9.9% for males and females, respectively. Native American and Alaska Native adults have the highest prevalence, 14.9% and 15.3% for males and females, respectively [ 10 •]. Therefore, there is a higher incidence of DFUs and amputations in individuals from minority backgrounds [ 11 ].
Diabetic foot osteomyelitis occurs in about 20 to 60% of DFUs [ 12 •]. In a retrospective study of 583 patients who developed diabetic foot osteomyelitis, Winkler et al. found that hindfoot diabetic osteomyelitis increases the risk of a major amputation fivefold compared to forefoot diabetic osteomyelitis [ 12 •].
According to a review completed by Rossboth et al., the risk factors that have been associated with developing diabetic foot diseases across the literature include male gender, poor glycemic control, peripheral neuropathy, retinopathy and nephropathy, insulin use, duration of diabetes, smoking, and height. The authors concluded that glycemic control and smoking are the modifiable risk factors associated with diabetic feet continuously cited in studies [ 13 •].
Racial Disparities in Diabetic Feet Management
Many studies have stressed the importance of using a multidisciplinary team in leading to better outcomes for patients with diabetic foot conditions [ 5 ••, 14 •, 15 ]. Diabetic foot conditions include foot ulcerations, Charcot neuroarthropathy, infection, and osteomyelitis, which require antimicrobial treatment, debridement, the need for hospitalization, revascularization procedures, and, unfortunately, in extreme cases, amputation (Table 1 ). Other techniques used to improve oxygenation and promote healing in diabetic foot conditions include hyperbaric oxygen therapy, far infrared energy, recombinant proteins and growth factors, and using biomaterials like self-assembling peptides [ 16 , 17 •, 18 , 19 •]. Methods have also been investigated to promote the healing of chronic wounds using different types of dressings such as hydrogel dressings, film dressings, foam dressings, hydrocolloid dressings, and alginate dressings that have various levels of efficacy [ 20 ••].
Diabetic foot ulceration is a multifactorial process, due to complications of neuropathy (peripheral, autonomic, and motor), vascular disease, immunodeficiency, and uncontrolled blood glucose, and occurs in 15% of patients with diabetes. DFUs are often managed with local debridement, nonweightbearing status, and frequent dressing changes [ 1 •].
Unfortunately, patients with diabetes have hypoimmunity, and diabetic foot ulcers often lead to infection [ 20 ••]. Diabetic foot infections are due to risk factors such as deep ulcers, ulcers existing for more than 30 days, a history of recurrent ulcers, traumatic injuries, and concurrent vascular disease [ 1 •]. Often resulting in hospitalization, these patients must undergo consistent monitoring of lab values (e.g., CBC, ESR, CRP, renal and hepatic function). Based on the severity of the infection, the steps taken can vary. Superficial infections involve surgical debridement, moist dressing, and nonweightbearing status with antibiotics until resolution. Treatment for moderate infections requires an escalation in management, prompting immediate hospitalization plus the protocols for superficial infections. Severe infections include all the treatment modalities, with early surgical intervention being pivotal [ 1 •].
Osteomyelitis develops when a diabetic foot infection has progressed to involve the periosteum, cortex, and medullary bone. This requires hospitalization and extensive monitoring with resection of infected bones or proximal-level amputation with antibiotic therapy [ 21 •].
A lack of access to care for racial and ethnic minorities contributes to ulcers being diagnosed at more severe stages and to a higher risk of hospitalization for diabetic foot ulcer. When controlling for DFU incidence, Black and Hispanic individuals are less likely to receive revascularization procedures and to benefit from limb preservation efforts, and are more likely to receive amputations than White individuals [ 9 ].
Diabetic foot infection is a serious complication and accounts for half of all cases of lower limb amputations [ 22 ]. Therefore, ongoing research is being completed to advance the treatments for diabetic foot conditions. Mahdipour et al. attempted to analyze the roles of recombinant proteins and growth factors in managing diabetic foot ulcers to evaluate their efficacy. The authors concluded that the epidermal growth factor showed the most use in improving the healing of diabetic foot ulcers but also asserted that the studies examined contained methodological flaws, making it difficult to ascertain clear conclusions [ 23 •].
Given the devastating effects diabetic foot conditions have on both the quality of life of patients and the healthcare system, there is a crucial push to develop and explore innovative treatments to improve the outcomes of these morbid conditions. Despite the emphasis in recent years on addressing racial and ethnic disparities, there continues to be a gap in research studies to create more racially diverse research cohorts, and new treatments for diabetic foot conditions are no exception [ 24 ]. A study conducted by Athonvaragnkul et al. investigated the role of far infrared energy in improving peripheral circulation with a cohort of 32 participants, only five of whom were non-White [ 17 •]. While this is one of the few studies conducted in the USA that explore new therapies, there are distinct social and economic factors to consider based on the history of this country [ 25 ••]. Therefore, it is vital to acknowledge the groups that are disproportionately affected by more severe presentations of diabetic foot conditions and make efforts to include those individuals in novel studies. It is imperative to emphasize that there are no clear guidelines on using race, ethnicity, and ancestry [ 26 ••]. Including these groups is not to compare the biological or genetic differences related to new treatment but rather to be more mindful of the impact that race and ethnicity have on access to resources and treatment.
Racial Disparities in Amputation Rates
A lower extremity is amputated due to diabetes every 20 s [ 27 •]. Amputations of the lower limb negatively affect a patient’s quality of life and are a tremendous financial burden, including costs for rehabilitation, prosthetic creation, management, and maintenance. While this can be difficult for any patient, it is more devastating for individuals from vulnerable populations as it significantly changes the psychosocial, functional, and economic aspects of their lives [ 28 ]. Lower extremity amputations (LEA) occur at higher rates in racial and ethnic minority groups, for people with low socioeconomic status, and in geographically vulnerable areas like rural areas without access to specialist care [ 5 ••].
As it relates to amputations secondary to diabetic complications, it is well documented that health disparities exist among individuals in racial and ethnic minority groups. Diabetic-related amputations are two to four times more likely to occur in Black, Native American, Hispanic, and other ethnic minority groups than in non-Hispanic White patients [ 5 ••]. They are more likely to have complications and a lower survival rate [ 25 ••]. Studies have been conducted to analyze the differences in rates and levels of amputation (Table 2 ).
Lavery et al. studied 8169 hospitalizations for LEA in African Americans, Hispanics, and non-Hispanic White, finding that more amputations occurred in African American people (61.6%) and Hispanic people (82.7%) than in non-Hispanic White people (56.8%) ( p < 0.001). The authors also stated that more proximal amputations were seen in African Americans compared to Hispanic and non-Hispanic White people ( p < 0.001) [ 29 ]. It is important to note that more proximal amputations require a higher demand on the cardiovascular and pulmonary systems for prosthetic gait [ 30 •]. There is also a strong correlation between the level of amputation and prosthesis use. Individuals with below-knee amputations are more likely to ambulate with or without a prosthesis than those with above-knee amputations [ 31 •]. The capacity to regain the skill of walking is vital to improving a person’s quality of life, such as their ability to participate in social activities and prevent metabolic bone disease due to immobility [ 30 •].
The disparities among African American individuals compared to White individuals were consistent with the results of Resnick et al., who analyzed 14,407 subjects in the National Health and Nutrition Examination Survey Epidemiologic Follow-up Study. Black participants with diabetes were more likely to receive LEAs than White participants, but the difference was only significant for incident diabetes mellitus (3.4% vs. 1.4%, p = 0.02); for prevalent diabetes mellitus it was not statistically significant. The study also showed that the locations of LEAs were distributed similarly between White and Black participants [ 32 ].
Young et al. documented that 3289 individuals had an LEA within the U.S. Veterans Affairs (VA) Health Care System during 1998. Of the patients included in the study, Native American participants had the highest risk of LEA compared with White participants (RR 1.74, 95% CI 1.39–2.18), followed by Black (RR 1.41, 95% CI 1.34–1.48) and Hispanic participants (RR 1.28, 95% CI 1.20–1.38). Regarding the level of amputation, Asian participants were more likely to receive toe amputations than White and other participants, and Native American participants were more likely to receive below-knee amputations (BKAs) [ 33 ].
The results of a study completed by Feinglass et al. showed that 65.9% of African American patients with diabetes (OR 1.69, 95% CI 1.11–2.56) underwent primary amputations, compared to 53.1% of White or other race patients. The authors concluded that, because racial and ethnic differences in amputation level are greater among people without diabetes than among patients with diabetes, both diabetic and non-diabetic African American individuals who undergo amputation are at equally increased risk for primary and repeat amputation, which supports the conclusion that diabetes prevalence does not drive the racial differences [ 34 ].
Tan et al. reviewed 92,929 Medicare beneficiaries with DFUs and/or diabetic foot infections (DFIs). Black (HR 1.9, 95% CI 1.7–2.2) and Native American (HR 1.8, 95% CI 1.3–2.6) beneficiaries were at an increased risk of major amputation compared to White beneficiaries. The authors discovered that beneficiaries diagnosed by a podiatrist or primary care physician, or at an outpatient visit, were less likely to receive a major LEA [ 35 •].
From 2009 to 2017, 112,713 patients with diabetes-related discharges in the USA underwent a minor or major LEA. Black (OR 1.12, 95% CI 1.08–1.16), Hispanic (OR 1.24, 95% CI 1.19–1.29), and Native American (OR 1.32, 95% CI 1.16–1.50) patients were more likely to experience a major amputation compared with White patients. Native American patients were also more likely to have a minor (toe and foot) amputation compared with White patients (OR 1.26; 95% CI 1.167–1.36) [ 36 •].
Miller et al. evaluated 68,633 Medicare fee-for-service beneficiaries with a DFU who experienced an LEA over a 5-year study period and found an increase in the proportion of Black/African American individuals who received an LEA after DFU; they were more likely to have an amputation within the first year after a DFU than non-Hispanic White individuals (OR 2.18, 95% CI 2.13–2.23) [ 25 ••].
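The odds, risk, and hazard ratios with 95% confidence intervals cited in the studies above follow from standard epidemiological formulas. As an illustration only, a minimal Python sketch computing an odds ratio with a Wald confidence interval from an entirely hypothetical 2×2 table (the studies themselves used standard statistical software and none of their raw tables are reproduced here):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% Wald confidence interval from a 2x2 table:
    a/b = cases/non-cases in the exposed group,
    c/d = cases/non-cases in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Entirely hypothetical counts, for illustration only
or_est, ci_lo, ci_hi = odds_ratio_ci(120, 880, 60, 940)
```

A confidence interval that excludes 1.0, as in most of the estimates above, indicates a statistically significant difference between the compared groups.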
In summary, there continues to be differences in amputation rates between racial and ethnic minority groups. Non-White individuals are more likely to receive more proximal or aggressive amputations than White individuals, regardless of diabetes mellitus status. African American patients without diabetes mellitus continue to be at an increased risk of primary and subsequent amputations compared to White individuals. These findings highlight the increased amputation rates among African American and other non-White racial groups compared to White individuals, even after excluding diabetes prevalence.
Future Directions
As with any medical condition, it is important that diabetic foot conditions are managed using a multidisciplinary approach [ 20 ••]. Recently, qualitative studies have been conducted to develop a deeper understanding as to why the disparities in diabetic foot conditions exist, and what the barriers are to obtaining proper care among minority groups. This is a multi-level issue that requires intervention at various points of entry within the healthcare system. Efforts should be made to improve health literacy, relationships with providers, and access to quality and effective medical care and services [ 37 ••]. | Declarations
Conflict of Interest
Elizabeth O. Clayton, Confidence Njoku-Austin, Devon M. Scott, Jarrett D. Cain, and MaCalus V. Hogan declare that they have no conflict of interest.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors. | CC BY | no | 2024-01-16 23:34:58 | Curr Rev Musculoskelet Med. 2023 Sep 21; 16(11):550-556 | oa_package/dc/77/PMC10587034.tar.gz |
|||
PMC10626779 | 37932771 | Background
The importance of gene-environment interactions in lung disease is well recognized [ 1 ], and given that the lung is a major port of entry for xenobiotics and pathogens into the systemic circulation, its barrier function at the alveolar septa is of critical importance. As with other organs, the lung ages with time, and this process is associated with structural and functional changes [ 2 ] that ultimately limit organ function. The changes are governed by the complex interplay of diverse cellular components, and in the seminal review of Franks et al. [ 3 ], the approximately 40 different cell types of the lung have been classified by their morphology and function. Correspondingly, resident cells of the respiratory tract are divided into airway epithelium, alveoli, salivary glands of the bronchi, interstitial connective tissue, blood vessels and cells of hematopoietic and lymphoid origin, the pleura, as well as poorly defined cells including progenitor cells (Fig. 1 ).
In terms of quantity, alveolar cells are by far the largest group of cells, and are classified as type I (AT1) and type II pneumocytes (AT2) which differ in their functions. Estimates suggest 480 million epithelial cells in the alveoli of adult human lung [ 4 ] which strikingly amounts to a surface area of about 70 m 2 [ 5 ]. AT1 cells cover > 95% of the alveolar surface area and serve as barrier in gas exchange while AT2 cells produce surfactant and therefore maintain the surface tension of the alveoli [ 6 ]. Lamellar bodies hallmark AT2 cells and typically are seen as single cells among layers of AT1 cells. In addition, AT2 cells can differentiate into AT1 ones, especially during injury and regeneration and are classified as progenitor cells [ 7 ].
The airway epithelium is composed of various epithelial cells with distinct morphological features [ 8 ], and the immune responses by airway epithelia have been the subject of a review [ 9 ]. Usually the cells localize along the respiratory tree, that is from the nasal cavity to the bronchi in form of pseudostratified columnar ciliated epithelium as well as simple columnar and cuboidal epithelium of bronchioles [ 10 ]. The basal cells are specialized airway epithelium and retain the ability to differentiate into distinct cell types, such as ciliated and club cells and function in lung homeostasis, regeneration and tissue repair [ 11 ]. Intermingled are goblet cells which stem from basal cells and secrete mucus for moisture and to trap pathogens or particulate matter for destruction and removal [ 12 ].
A distinct group of cells of the respiratory tract functions in blood vessels [ 13 ]. Here, the intima is composed of a thin layer of endothelial cells with complex functions in vascular biology and immune response [ 14 ]. Expectedly, about 70% of the alveolar surface area is covered by blood capillaries.
Given that the lung serves as a port of entry for environmental pathogens and toxins, a wide range of innate and adaptive immune cells can be found and this includes alveolar macrophages, dendritic cells, circulating monocytes, T and B lymphocytes and granulocytes [ 15 ]. Among the immune cells, alveolar macrophages are the first line of defense in the clearance of particulate matter and pathogens [ 16 ], and they can be found in the pulmonary interstitium where they reside in the parenchyma between the microvascular endothelium as well as alveolar macrophages, which are in close contact with type I and type II pneumocytes. Furthermore, monocytes can differentiate into polarized macrophages, based on their response to different cytokines [ 17 ] and play decisive roles in the immune response [ 18 ].
Additionally, the interstitial connective tissue of the lung provides structural support and is composed of extracellular matrix (ECM) proteins which are cross-linked. Of note, interstitial lung disease (ILD) refers to the scaring of connective tissue and comprises a range of conditions including chronic obstructive pulmonary disease (COPD) and idiopathic pulmonary fibrosis (IPF) [ 19 ]. ECM components which are produced by fibroblasts such as type III collagen, elastin and proteoglycans become activated during wound repair. Their accumulation contributes to stiffness, i.e. a key feature of ILD [ 20 ].
Further, the pleura is layered by a simple squamous epithelium (mesothelium), and mesothelial cells play a vital role in lung development and immune response [ 21 ]. Finally, the poorly defined cells subsume multiple stem/progenitor populations in different regions of the adult lung [ 22 ].
Importantly, with the advent of single cell genomics, it is possible to study genomes of distinct cell types of the lung, for example, surfactant producing AT2 cells [ 23 ] or mucin producing bronchial epithelial cells [ 24 ]. Correspondingly, single cell transcriptomics enables the responses of a particular cell type to environmental changes and exposures to toxins and pathogens to be traced [ 25 ].
So far only a few studies investigated the complex age-related changes of the pulmonary genome [ 5 , 26 – 30 ], and there is unmet need to better comprehend the molecular events associated with the aging of the lung and its immune responses. Based on genomic data, we aimed at identifying genomic responses linked to the aging lung by considering whole lung and single cell transcriptomic data. We were particularly interested in an understanding of the age-related changes associated with stiffness and immune responses and investigated cellular senescence and surfactant biology. Finally, we compared genomic responses of the aging mouse and human lung and considered genomic variations among resident cells of the respiratory tract. | Methods
Depicted in Fig. 2 is the workflow and data analysis for the mouse and human pulmonary genomes and single cell RNAseq.
Mouse lung genomic data
We retrieved data from the Gene Expression Omnibus (GEO) database ( https://www.ncbi.nlm.nih.gov/geo/ ). A total of 89 individual data sets from 15 studies (see supplementary Table S1 & supplementary Figure S1 ) were selected based on the following criteria: Healthy mice of the C57BL/6 strain, comparable age and sex and identical experimental platform, i.e. Affymetrix Mouse Genome 430 2.0. We excluded treatment related genomic data or results from genetic animal models and data sets where the correct age and sex could not be determined. Shown in supplementary Figure S1 is the gender distribution over age. All 89 genomic data sets were used to compute the linear regression model and other statistical testing (see below).
Mouse single cell genomic data
We considered single cell RNA-sequencing data to assign age-related gene expression changes to specific cell types of the lung. For this purpose we queried single cell transcriptome RNAseq counts data from GSE124872 [ 5 ] in addition to data deposited in four different databases, i.e. CellMarker ( http://biocc.hrbmu.edu.cn/CellMarker/ ) [ 178 ], Mouse Cell Atlas ( http://bis.zju.edu.cn/MCA/index.html ) [ 179 ], LungGENS ( https://research.cchmc.org/pbge/lunggens/mainportal.html ) [ 180 ] and Lung Aging Atlas ( http://146.107.176.18:3838/MLAA_backup/ ) [ 5 ].
Human lung genomic data
We retrieved genomic data from the Cancer Genome Atlas (TCGA) ( https://portal.gdc.cancer.gov/ ) and GEO public repository. Furthermore, we collected single cell genomic data from the CellMarker and the LungGENS database in addition to data from a research article [ 278 ]. Next, we divided the data into test and validation sets and provide information on the patient characteristics in supplementary Table S2 . Note, the study cohorts are balanced for gender.
Human lung genomic test set data
We retrieved RNA-Seq data of normal human lung from TCGA. We selected 107 individual data sets by the following criteria: “Primary Site” as “bronchus and lung”, “Sample type” as “solid tissue normal” defined by histology, “age at diagnosis”, i.e. 42–50 years (N = 5), 51–70 years ( N = 62) and 71–86 years ( N = 40).
Human lung genomic validation data
First, we retrieved 40 data sets derived of histologically normal lung tissue from 23 human donors (GSE1643). After removal of duplicates ( N = 4) and specific location, i.e. upper lobe ( N = 13) a total of 23 lower lobe lung samples were considered for further analysis.
Second, we considered the genomic data of 284 individuals who underwent lobectomy for lung adenocarcinoma and considered the genomic data of non-involved (apparently normal) lung parenchyma of this study (GSE71181). We selected three age groups, i.e. 21–50 years ( N = 27), 51–70 years ( N = 191) and 71–85 years ( N = 89). Together, the human test and validation set consisted of 414 human lung tissue samples.
Human single cell genomic data
We retrieved single cell RNA-seq data to assess age-related gene expression changes of specific cell types of the human lung. For this purpose, three databases were queried, i.e. CellMarker, LungGENS and data given in [ 278 ]. Subsequently the data were processed as detailed below.
Data normalization
We performed computations in the R language (version 3.6.3) and applied the Robust Multi-array Average (RMA) algorithm (“affy” package, version 1.66.0) for quantile normalization and background correction of the mouse genomic data. Likewise, we used the “Seurat” package (version 4.3.0) to normalize single cell RNAseq data as CPM. In the case of the human test set data, we selected the “DESeq2” package (version 1.26.0) to normalize RNA-seq counts. The human validation set data consisted of microarray transcriptomic data which had been processed by the “GC Robust Multi-array Average (GCRMA)” method (GSE1643) and the “robust spline normalization” and “ComBat batch adjustment” algorithm (GSE71181). We only considered genes with a signal intensity of > 75.
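The normalization steps above were performed in R; as an illustrative sketch with toy data (Python, not the authors' code), the counts-per-million (CPM) scaling applied to the single cell RNAseq data amounts to:

```python
def cpm(counts):
    """Counts-per-million: scale one sample's raw read counts so
    that they sum to one million, making samples comparable."""
    total = sum(counts)
    return [c * 1_000_000 / total for c in counts]

sample = [10, 90, 400, 500]   # toy raw counts for four genes
normalized = cpm(sample)      # values now sum to one million
```

In practice Seurat additionally log-transforms these values; this sketch shows only the library-size scaling.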
Differentially Expressed Genes (DEGs)
In order to define age-dependent gene expression changes and to compare mouse and human lung genomes, we analyzed the RMA normalized mouse and human validation transcriptomic data by a linear regression model and by computing the Linear Models for Microarray data (LIMMA, version 3.42.2). In the case of LIMMA, we selected DEGs based on the following criteria: Signal intensity > 75, Benjamini-Hochberg (BH)-adjusted p -value < 0.05 [ 279 ] and a fold change (FC) ≥ 1.5-fold. For the human test set data, we selected the package “DESeq2”, and DEGs were considered statistically significant based on BH-adjusted p -value < 0.05 and FC ≥ 1.5-fold. Statistical testing of single cell transcriptome data involved the “Wilcoxon rank-sum” test (within the “Seurat” package), and we considered significant DEGs based on BH-adjusted p -value < 0.05 and a FC ≥ 1.5-fold.
Apart from identifying significantly regulated genes by the linear regression model and LIMMA, we also applied the hypergeometric test to measure statistical significance. We observed good agreement between the methods. For instance, for down-regulated genes the range is 90%-98% when the results from LIMMA and the hypergeometric test were compared (supplementary Figure S 9 ).
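The Benjamini-Hochberg adjustment used throughout the DEG calling above can be sketched in a few lines (Python illustration only; the analyses relied on R's built-in implementation):

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (false discovery rate).
    Returns adjusted values in the original input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adj[i] = prev
    return adj

adj = bh_adjust([0.001, 0.02, 0.03, 0.5])  # toy p-values
```

Genes then pass the filter only if the adjusted value is below 0.05 and the fold change is at least 1.5.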
Furthermore, we constructed heatmaps by utilizing the function “pheatmap” (version 1.0.12) in R and applied the complete clustering method. We used the Z-score to plot the heatmap.
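The row-wise Z-score used for the heatmaps is simply mean-centering followed by scaling with the standard deviation; a toy Python sketch (the plotting itself was done with R's "pheatmap"):

```python
import statistics

def zscore(values):
    """Row-wise Z-score as used for heatmap plotting: center each
    value on the row mean and scale by the row standard deviation."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

row = [2.0, 4.0, 6.0, 8.0]  # toy expression values across samples
z = zscore(row)             # mean 0, unit variance across the row
```

This puts every gene on the same color scale regardless of its absolute expression level.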
Gene enrichment analysis
Gene ontology enrichment analysis
We employed the software: Metascape [ 280 ], DAVID [ 281 ] and the g:Profiler resource [ 282 ] to identify enriched Gene Ontology (GO) terms [ 283 ]. Only terms with a p -value < 0.05 and a BH-adjusted p -value < 0.05 were considered. Subsequently, we determined the consensus of enriched terms among the different software to define commonalities between them.
Gene set enrichment analysis (GSEA)
We computed the gene set enrichment analysis (GSEA) [ 31 ] with the “clusterProfiler” tool in R (version 3.14.3) [ 284 ]. For the mouse and human genomic data, we retrieved information deposited in the “Gene Set Knowledgebase (GSKB)” (version 1.18.0) [ 285 ] and the Molecular Signature Database ( http://www.gsea-msigdb.org/gsea/msigdb/index.jsp ) [ 31 ]. We normalized the enrichment scores (ES) based on random sampling of ES of the same gene set size and applied an absolute “normalized enrichment score” (NES) > 1.5 and a p -value < 0.05 for further analysis.
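The enrichment score underlying GSEA can be illustrated with a simplified, unweighted running-sum statistic (Python sketch; the actual clusterProfiler implementation weights hits by the ranking metric and estimates significance by permutation, which is omitted here):

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted GSEA-style enrichment score: walk down the ranked
    gene list, stepping up at gene-set hits and down at misses, and
    return the maximum deviation of the running sum from zero."""
    hits = sum(1 for g in ranked_genes if g in gene_set)
    misses = len(ranked_genes) - hits
    up, down = 1.0 / hits, 1.0 / misses
    running, best = 0.0, 0.0
    for g in ranked_genes:
        running += up if g in gene_set else -down
        if abs(running) > abs(best):
            best = running
    return best

ranked = ["A", "B", "C", "D", "E", "F"]      # toy ranked list
es = enrichment_score(ranked, {"A", "B"})    # hits cluster at the top
```

Normalizing such scores against random gene sets of the same size yields the NES used as the selection threshold above.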
Single-sample gene set enrichment analysis (ssGSEA)
This procedure is an extension of the GSEA method and is computed in R with the tool “Gene Set Variation Analysis (GSVA)” (version 1.34.0) [ 286 ]. The results are presented as violin plots separated by the different age groups.
Venn diagram analysis
Venn diagrams [287] were drawn to show the relationship between different DEGs and enrichment terms among the different ages.
Statistical analysis
We used R to compute a linear regression model for all mouse and human data sets, i.e. N = 89 mouse and N = 414 human lung samples. Genes with a p -value < 0.05 and R2 > 0.4 were considered statistically significant. Furthermore, we used Bioconductor to perform statistical computations and applied the “Shapiro–Wilk” test for normality to evaluate the data distribution, and the “Bartlett” test to assess the homogeneity of variance for time dependent gene expression changes. If the data were not normally distributed, we used the “Kruskal–Wallis” and “Wilcoxon rank-sum” test (Mann–Whitney U test) for significance testing. Specifically, the “Kruskal–Wallis” test provided an indiscriminate estimate of time dependent changes in the expression of gene markers of different cell types of the lung, whereas the “Wilcoxon rank-sum” test discriminated among the different age groups. Furthermore, we assessed the distribution of gene markers for pulmonary resident cells with the Kolmogorov–Smirnov test (see ssGSEA), comparing their distribution in individual lung samples from young and aged mouse and human pulmonary tissue samples.
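The per-gene regression of expression on age can be sketched with an ordinary least-squares fit and its R² (Python, entirely toy values; the study fit the model in R across all samples for every gene and applied the p < 0.05, R² > 0.4 thresholds):

```python
def linreg_r2(x, y):
    """Ordinary least-squares slope, intercept and R-squared for a
    single gene's expression values (y) regressed on age (x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

ages = [5, 26, 52, 78, 104, 130]        # age in weeks (toy values)
expr = [6.1, 6.4, 7.0, 7.3, 7.9, 8.2]   # normalized intensity (toy values)
slope, intercept, r2 = linreg_r2(ages, expr)
```

A gene with a positive slope and high R², as in this toy example, would be called continuously up-regulated with age.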
The data are visualized as violin- and boxplots with interquartile range (IQR) and 1.5*IQR whiskers with outliers defined as data points outside the whiskers. A p -value of < 0.05 is considered significant. | Results
Figure 2 depicts the workflow of the data analysis. The process is divided into data retrieval, normalization, statistical testing for DEGs, gene enrichment analysis and single cell RNAseq. We divided the mouse and human lung genomic data into a test and validation set and performed independent computational analyses.
Mouse genomic data sets
We considered 89 individual mouse genomic data sets for in-depth evaluation (GEO database GSE83594, GSE66721, GSE55162, GSE34378, GSE38754, GSE23016, GSE25640, GSE18341, GSE15999, GSE14525, GSE11662, GSE10246, GSE9954, GSE6591, GSE3100) and show the distribution of genomic data according to sex and age in supplementary Figure S1 and supplementary Table S 1 .
Based on N = 43 individual male and N = 46 female mice genomic data sets (supplementary Table S 1 ), we searched for time dependent gene expression changes. The linear regression model fitted 105 significantly regulated genes. Note, we only considered DEGs which fulfilled the criteria: FDR adjusted p-value < 0.05 and a goodness of fit of R2 > 0.4. We also considered sex differences, and this revealed 690 (305 up, 385 down) and 170 (144 up, 26 down) genes specifically regulated among male and female mice. Note, 68 genes are common to both sexes. We performed gene enrichment analysis, and for males enriched terms include hemostasis, positive regulation of cell adhesion, positive regulation of cell migration, regulation of MAPK cascades, platelet activation, ECM organization, chemotaxis, blood vessel development and the adaptive immune system. Likewise, for females enriched terms include inflammatory response, leukocyte chemotaxis, cell activation, neutrophil degranulation and regulation of the ERK1 and ERK2 cascade. Although we observed sex dependent gene expression changes with age, the gene ontology enriched terms are similar and point to inflammation, immune response, ECM remodeling and cell adhesion, among others. The Metascape enriched terms specifically for up- and down-regulated genes are shown in supplementary Figure S2 .
We also evaluated time-dependent gene expression changes by excluding animals aged < 5 weeks. Independent of sex, the regression model fitted 90 DEGs whose expression changed with time. Moreover, we compared the DEGs resulting from the two regression models, i.e. with and without animals aged 1–5 weeks, and found 76% to be in common. Therefore, from the very beginning of life, 80 pulmonary genes (68 induced, 12 repressed) are continuously regulated with age, and we show examples of significantly regulated genes (p < 0.05, R2 > 0.4) in supplementary Figure S3.
Additionally, we analysed the genomic data with the LIMMA package (Linear Models for Microarray Data) and compared animals aged 1–5 weeks (N = 15) with animals aged 6–26 (N = 52) and 52–130 (N = 22) weeks. This revealed 120 and 134 genes, respectively, as continuously up- and down-regulated with age. Furthermore, 38 genes were common when the results from the linear regression model and LIMMA were compared.
Human lung genomic test set data
We retrieved RNA-Seq data of 107 histologically proven normal lung tissue samples from the TCGA repository, i.e. resection material from lung cancer patients (supplementary Table S2). The cohort consisted of individuals aged 42–86 years, and based on normalized counts, the linear regression model fitted 237 significantly regulated genes (27 up and 210 down). We also analysed the RNAseq data with the DESeq2 package and compared individuals aged 42–50 (N = 5) to individuals aged 71–86 (N = 40) years. This defined 1430 genes (716 up, 714 down) with a FC ≥ 1.5 and an FDR-adjusted p-value < 0.05. Note, 56% of the genes from the linear regression model overlap with significantly regulated genes as defined by the DESeq2 method (supplementary Figure S4).
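The two-group contrast above can be illustrated with a simplified sketch. DESeq2 fits a negative-binomial model to raw counts; here a Welch t-test on log2 counts stands in for it, with synthetic placeholder data shaped like the cohort (5 young vs 40 old samples) and the DEG criteria from the text (FC ≥ 1.5, FDR-adjusted p < 0.05).

```python
import numpy as np
from scipy import stats

# Synthetic placeholder counts: 100 genes; 5 "young" (42-50 y) and
# 40 "old" (71-86 y) lung samples, log-normally distributed.
rng = np.random.default_rng(1)
young = 2 ** rng.normal(8, 0.3, size=(100, 5))
old = 2 ** rng.normal(8, 0.3, size=(100, 40))
old[0] *= 2.0  # plant one 2-fold induced gene

# Fold change and per-gene Welch t-test on log2-transformed values
log2fc = np.log2(old.mean(axis=1) / young.mean(axis=1))
pvals = stats.ttest_ind(np.log2(old), np.log2(young),
                        axis=1, equal_var=False).pvalue

# Benjamini-Hochberg FDR adjustment
order = np.argsort(pvals)
m = len(pvals)
ranked = pvals[order] * m / np.arange(1, m + 1)
padj = np.empty(m)
padj[order] = np.minimum.accumulate(ranked[::-1])[::-1]

# DEG criteria used in the text: |FC| >= 1.5 and FDR-adjusted p < 0.05
degs = np.flatnonzero((np.abs(log2fc) >= np.log2(1.5)) & (padj < 0.05))
```

The real analysis additionally models library size and dispersion per gene, which is why DESeq2 (or LIMMA for arrays) is preferred over a plain t-test on count data.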
Human lung genomic validation data set
We retrieved genomic data sets of histologically normal lung tissue from 23 human sudden death donors (GSE1643) in addition to 284 individuals who underwent lobectomy for lung adenocarcinoma (GSE71181). We only considered genomic data of histologically proven normal lung parenchyma. Together, the validation set consisted of 307 individual genomic data sets (supplementary Table S2 ). Based on linear regression analysis the model fitted 857 up- and 816 down-regulated genes.
Moreover, we used LIMMA to identify significantly regulated genes and compared two age groups, i.e. 21–50 years (N = 27) and 71–85 years (N = 89). We applied the criteria of FC ≥ 1.5 and an FDR-adjusted p-value < 0.05, and this defined 20 significantly up- and 4 significantly down-regulated genes. Furthermore, all genes identified by LIMMA are significant in the linear regression model.
Lastly, we compared the human test and validation sets, and based on the regression model findings, 26 genes are commonly regulated.
Gene ontology and gene set enrichment analysis (GSEA)
We used two different approaches to group DEGs based on gene cluster analysis and gene ontology terms. Depicted in Fig. 3 A and B are the Z-scores for all 89 data sets and for the 74 mice aged 6–26 and 52–130 weeks. We show significantly regulated genes obtained by the linear regression model and discuss their relevance for aging below. Similarly, Fig. 3 C and 3D are heatmaps for a subset of the human test (N = 17) and validation sets (N = 51). For mice and the human test set, the data are mostly separated by age, while for the human validation set some adult lung samples are intermingled with the aged ones.
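The per-gene Z-scores plotted in these heatmaps are simply each gene's expression centered and scaled across samples, so that rows become comparable regardless of absolute expression level. A minimal sketch with placeholder values:

```python
import numpy as np

def zscore_rows(expr):
    """Row-wise Z-scores for a genes x samples matrix:
    each gene is centered on its mean and scaled by its SD."""
    mu = expr.mean(axis=1, keepdims=True)
    sd = expr.std(axis=1, keepdims=True)
    return (expr - mu) / sd

# Toy matrix: 2 genes x 3 samples (placeholder values)
mat = np.array([[1.0, 2.0, 3.0],
                [10.0, 10.0, 13.0]])
z = zscore_rows(mat)
```

After this transformation every row has mean 0 and standard deviation 1, which is what allows low- and high-abundance genes to share one heatmap color scale.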
Depicted in Fig. 4 A and B are enriched GO terms for mice aged 6–130 weeks. We separated up- and down-regulated genes and analyzed the data with Metascape, David and g:Profiler, considering only enriched terms with an FDR-adjusted p-value < 0.05. Based on 579 genes with age-increased expression as determined by the regression model (p < 0.05, R2 > 0.2), we identified 26 terms overlapping between Metascape, David and g:Profiler (Fig. 4 A). The 579 upregulated genes mapped to the terms cell activation, inflammatory response, regulation of cytokine production, endocytosis, calcium-mediated signaling, positive regulation of angiogenesis and lipopolysaccharide-mediated signaling pathway (Fig. 4 A). Similarly, we mapped 223 repressed genes (Fig. 4 B) to GO terms, and although less than 10% overlapped (supplementary Figure S5), most terms were similar. Among the common GO terms, we wish to highlight ECM organization, cell adhesion, blood vessel development, collagen fibril organization, transmembrane receptor protein tyrosine kinase signaling pathway and transforming growth factor beta receptor signaling pathway. Moreover, we considered GO terms overlapping between two different annotation tools, and given the focus of our study, i.e. genes regulated in the aging lung, we selected top GO terms linked to lung biology (Fig. 4 B). About 102 and 140 down-regulated genes overlapped between Metascape and g:Profiler, and between David and g:Profiler, respectively, and significantly enriched terms are regulation of cellular response to growth factor stimulus, regulation of cell-substrate adhesion, vasculature development, positive regulation of integrin-mediated signaling pathway, cell-substrate adhesion, cell junction organization, regulation of signal transduction and cell migration.
Additionally, we performed gene enrichment analysis for the human validation set of N = 307 individuals. The linear regression model fitted 857 up- and 816 down-regulated genes. Significantly enriched GO terms for age-dependent increases in gene expression include extracellular matrix organization, positive regulation of cell motility and cell migration, leukocyte migration, inflammatory response, elastic fiber assembly, collagen fibril organization and positive regulation of the apoptotic response (Fig. 4 C). Likewise, for repressed genes and based on DAVID annotations, enriched terms are tight junction assembly, cellular response to reactive oxygen species, PPAR signaling pathway and lipid transport (Fig. 4 D).
Mouse gene set enrichment analysis
Apart from GO terms, we performed gene set enrichment analysis (GSEA) for the 89 mouse lung genomic data sets. The basic algorithm is described in [ 31 ] and is based on the Kolmogorov–Smirnov test, which assesses the distribution of gene markers by comparing their position and rank order in individual lung samples from young and aged animals. We report the normalized enrichment score (NES), p-value and adjusted p-value in supplementary Table S3, and according to the NES, cell cycle, cell adhesion, collagen-fibril organization, extracellular matrix structural constituent and morphogenesis of epithelium are enriched in animals aged 1–5 weeks (N = 15) as compared to 6–26 weeks (N = 52). We likewise compared animals aged 6–26 (N = 52) vs 52–130 (N = 22) weeks, and the GSEA defined blood vessel development, basal lamina, integrin binding, cellular response to vascular endothelial growth factor stimulus and positive regulation of autophagy as significantly enriched. Conversely, fatty acid transport, platelet activation, scavenger receptor activity, extracellular space, B-cell activation and antigen processing and presentation of exogenous peptide antigen via MHC class II were enriched terms for aged animals.
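The running-sum statistic underlying GSEA can be sketched in a few lines. This is the unweighted Kolmogorov–Smirnov-style variant on a synthetic gene list, not the weighted scoring or permutation-based NES of the full algorithm: walk down the ranked gene list, step up at gene-set members and down otherwise, and take the maximum deviation from zero as the enrichment score.

```python
def enrichment_score(ranked_genes, gene_set):
    """Unweighted GSEA-style running-sum statistic.
    ranked_genes: genes sorted by differential expression;
    gene_set: the set being tested for enrichment."""
    hits = [g in gene_set for g in ranked_genes]
    n_hit = sum(hits)
    n_miss = len(ranked_genes) - n_hit
    running, best = 0.0, 0.0
    for is_hit in hits:
        # step up for set members, down for non-members
        running += 1.0 / n_hit if is_hit else -1.0 / n_miss
        if abs(running) > abs(best):
            best = running
    return best

# Toy example: a set clustered at the top of the ranking scores +1.0
ranked = ["g1", "g2", "g3", "g4", "g5", "g6"]
es = enrichment_score(ranked, {"g1", "g2"})
```

In the full method the score is normalized against a permutation null distribution to yield the NES and its p-value reported in supplementary Table S3.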
Time resolved gene expression changes in the aged lung of mice
We searched for continuous gene expression changes, and based on a linear regression model for 74 individual pulmonary genomic data sets (ages 6–130 weeks) we identified 77 up- and 13 down-regulated genes whose expression changed with age (Fig. 5 A1 and 5D1 and supplementary Table S4). The results for all 89 individual mouse genomic data sets (ages 1–130 weeks) are summarized in supplementary Figure S6. The boxplots shown in Fig. 5 A2 depict significantly increased gene expression changes, and the genes are grouped based on their functions. For instance, we observed 6 genes coding for the adaptive immune response among 74 individual animals whose expression continuously increased with time (Fig. 5 A2). Therefore, the boxplot comprises a total of 444 individual gene expression changes (= 74 animals × 6 genes). Strikingly, 49% of the continuously upregulated genes code for the immune response, and apart from adaptive immune and inflammatory responses, we observed an age-related increased expression of genes coding for the cellular response to chemokines and immunoglobulin production. Furthermore, the ontology terms highlighted leukocyte cell activation and regulation of cell adhesion, and shown in Fig. 5 B are individual gene expression changes. To independently confirm the results, we interrogated single cell RNAseq data, and the results are summarized in Fig. 5 C. Here we compared 12 week old animals with aged mice (= 96 weeks) and assessed the expression pattern of 37 immune response genes (Fig. 5 C) in alveolar and interstitial macrophages, interstitial fibroblasts, dendritic cells, CD4+ and CD8+ T-cells, B-cells as well as AT2, ciliated, club, goblet and NK cells. We confirmed an age-related induced expression of 27 genes and therefore validated 73% of the immune response genes in RNAseq data of single cells isolated from lung tissue of mice. Note, the single cell RNAseq data comprise an average of 500–1000 cells (see GSE124872).
The genomic data signify upregulation of gene markers linked to alveolar and interstitial macrophages, B-, CD4 + and CD8 + T cells. Indeed, a recent review highlighted the importance of age-related changes in the pulmonary adaptive and innate immune system [ 32 ].
Based on the David tool, 83% of the significantly upregulated genes can be grouped according to gene ontology terms. An additional 4% of upregulated genes were assigned to ontology terms defined by Metascape and g:Profiler, and the associated DEGs code for protein binding, immune response and extracellular space. For the remaining genes the software tools did not provide meaningful terms, even though some important genes are worth mentioning. For instance, we observed significant regulation of alpha-N-acetyl-neuraminide alpha-2,8-sialyltransferase 6 ( St8sia6 ), i.e. an enzyme that catalyzes the transfer of sialic acid to carbohydrates to influence cell–cell communication and adhesion. Its expression increased 1.8- and 3.3-fold in animals aged 6–26 and 52–130 weeks, respectively, when compared to 1–5 week old animals. Another gene of interest is membrane-spanning 4-domains, subfamily A, member 6B (Ms4a6b), which is expressed by regulatory T cells and functions as a co-stimulatory molecule to amplify antigen signals, thereby modulating T-cell function [ 33 , 34 ]. Indeed, this CD20-like precursor is induced (> 3-fold) when week 6–26 aged mice are compared to 1–5 week old animals, and its expression increased further in aged animals. Another example refers to ecotropic viral integration site 2A (Evi2a), i.e. a leucine-zipper transmembrane protein [ 35 ] and proto-oncogene with multiple functions in tumor growth and inflammation [ 36 , 37 ] as well as mitogenic MEK/ERK signaling in osteosarcoma [ 37 ]. Based on multi-omics network modeling, Evi2a influenced ECM receptor interaction and focal adhesion [ 38 ], and we found Evi2a transcripts to continuously increase in expression. Additionally, expression of H4 clustered histone 17 ( Hist1h4m ), a core component of the nucleosome, increased with age.
Apart from upregulated genes, we searched for genes continuously down-regulated over time, and the results from the linear regression model are shown in Fig. 5 D-E. Based on gene annotations, the associated DEGs code for extracellular matrix and collagen fibril organization, respiratory system development, response to growth factors, blood vessel development and ossification (Fig. 5 D). We classified 79% of the continuously down-regulated DEGs with the David tool, and by querying the Metascape and g:Profiler software we assigned an additional 7% of DEGs to extracellular matrix structural components. Nonetheless, some DEGs were not allocated to specific ontology terms even though their regulation is of great significance. For instance, we found the gene coding for nicotinamide nucleotide transhydrogenase (Nnt), i.e. an enzyme that transfers protons to NAD + and NADP + , repressed 2-fold when animals aged 1–5 weeks were compared to aged mice. NADPH is a central cofactor for many biochemical processes and is required for the production of glutathione. Importantly, glutathione is one of the most important cellular defense lines in the detoxification of reactive oxygen species (ROS) [ 39 ] and also functions in inflammatory responses. Overexpression of Nnt in macrophages reduced intracellular ROS and the production of pro-inflammatory cytokines. Conversely, knockdown of Nnt increased intracellular ROS and inhibited cell proliferation [ 40 ]. Together, we consider the time-dependent repression of Nnt as detrimental, as it will dampen efficient ROS detoxification, thereby contributing to cellular senescence.
A further example relates to Lys-Asp-Glu-Leu (KDEL) endoplasmic reticulum protein retention receptor 3 (Kdelr3), i.e. a seven-transmembrane-domain receptor localized within the Golgi complex. The binding of KDEL ligand to its receptor triggers phosphorylation of the Src-family of kinases, and the phosphorylation of p38 stimulates the activation of MAPKs [ 41 ]. Given that Kdelr3 was 2-fold repressed in aged mice, we consider this change in gene expression as an adaptive response to age-related stress responses and inflammation.
Similarly, we found hephaestin 2-fold repressed in aged mice. The gene codes for an exocytoplasmic ferroxidase that plays an essential role in cellular iron homeostasis. Hephaestin favours iron export from cells, and the protein is expressed in epithelial cells of the alveoli, type II pneumocytes, the bronchiole and endothelial cells [ 42 ]. Due to its role in iron transport, we consider the repression of hephaestin to be detrimental, as malfunction of the protein will lead to an iron overload of pneumocytes and the propagation of reactive oxygen species (ROS). The importance of hephaestin in lung cancer iron homeostasis was the subject of a recent report, and reduced expression of hephaestin is associated with poor prognosis for lung adenocarcinoma and squamous cell carcinoma [ 43 ].
Regarding the regulation of genes coding for the extracellular matrix, a complex picture emerges, with some being time-dependently repressed, notably matrix metalloproteinase (Mmp)-2 , Mmp14 and the disintegrin and metalloproteinase with thrombospondin motifs (Adamts)2 , while others (e.g., Lrg1 , Dcn ) were upregulated, especially in animals aged 6–26 weeks when compared to old ones. These marker genes code for MMPs and ADAMTSs, which function in ECM remodeling, as discussed below.
Time resolved gene expression changes in the aged human lung
We searched for continuous gene expression changes in the pulmonary genome of the aging human lung and show in Fig. 6 the results for the validation set of 307 human lung samples. Given the primary aim of our study, we focused on the following GO terms for an age-related increase in gene expression changes: Naba core matrisome, extracellular matrix organization, degradation of the extracellular matrix, inflammatory response pathway and collagen fibril organization (Fig. 6 A-B). The results for an age-related down-regulation of genes are shown in Fig. 6 C-D. Here, we focused on the following GO terms: Cell morphogenesis, microtubule cytoskeleton organization, tight junction assembly, lipid transport and PPAR signaling.
ECM remodeling in the aged lung of mice
To better understand the remodeling of the extracellular matrix in the aging lung, we evaluated the expression of genes coding for collagens, proteoglycans, matrix metalloproteinases and their inhibitors, in addition to glycoproteins, over time. We considered the genomic data sets of N = 89 mice, and based on the linear regression model and the LIMMA statistical test we determined statistical significance for ECM coding genes whose expression pattern followed a V-shape (Fig. 7 A-C), in addition to continuously increased (Fig. 7 D-F) and repressed transcript expression over time (Fig. 7 G-I). Moreover, we identified a group of ECM coding genes which only increased in expression when 6–26 week old animals were compared to aged ones (Fig. 7 F); in the same comparison, some ECM coding genes declined in expression (Fig. 7 I). Collectively, the genomic data informed on ECM remodeling in the aging lung (supplementary Table S5), and its increased deposition contributes to stiffness [ 44 , 45 ].
V-shaped gene expression changes of ECM coding genes over time
Lung development and organ function are highly dependent on the coordinated expression of ECM genes. For instance, independent research identified interstitial collagens (collagen I and III) to achieve maximal expression levels at day 7 post partum, which coincides with postnatal alveologenesis [ 45 ]. In the present study, several collagens followed a V-shaped expression pattern, i.e. Col2a1 , Col6a4 , Col9a1 , Col10a1 , Col11a1 , Col11a2 , Col15a1 and Col25a1 , and the majority code for proteins of the lung scaffold. However, we were surprised to see the regulation of Col2a1, Col9a1, Col10a1, Col11a1 and Col11a2 in the lung, given their major role in cartilage matrix biology. Notwithstanding, independent research reported a 4-fold upregulation of Col2a1, especially in aged mice [ 28 ], while in the present study a 2-fold upregulation was computed. Note, in the human lung, upregulation of COL10A1 is part of the ECM remodeling, especially in lung cancer patients, and this collagen stimulates cell proliferation [ 46 ]. Likewise, independent research revealed Col10a1 as 1.9-fold increased in aged mice [ 5 ], which is similar to our data, i.e. 2.2-fold. Furthermore, collagens type V and VI are structural components of the connective tissue surrounding vascular and bronchial walls [ 47 , 48 ] and are enriched in fibers of fibrotic lesions [ 47 ]. With age, their augmented expression results in denser and thicker ECM fibrils; therefore, the accumulation of these collagens contributes to lung stiffness [ 47 ]. Similarly, the lamina reticularis of the airway becomes thickened by the enhanced deposition of collagens I and V, which limits airway gas exchange. This is part of an airway remodeling that eventually results in damage to the alveolar walls and the occurrence of emphysema [ 49 ]. The formation of heterotypic fibrils also includes collagen I and collagen III, i.e. major collagens of the pulmonary ECM which declined with age (Fig. 7 G).
In fact, collagen VI forms a unique beaded-filament structure [ 50 ] and acts as a binding element between collagen I/III fibrils and basement membranes [ 48 , 51 ]. Additionally, collagen VI promotes cell adhesive properties of the ECM [ 52 ] and contributes to epithelial cell homeostasis [ 50 ]. During lung fibrosis the expression of collagen VI is increased and typically forms fibrils with collagen III [ 48 ].
In the present study, the transcript expression of Col6a4 increased whereas Col6a1, Col6a2 and Col6a3 decreased to a similar extent in aged mice (Fig. 7 B and 7H), and independent research confirmed our findings at the protein level [ 5 , 28 ]. Collagen VI is an essential basement membrane component, while collagen XV ( Col15a1 ) acts as a biological ‘spring’ between the basement membrane and the interstitial border and therefore contributes to the stabilization and protection of the pulmonary structure [ 53 ].
A further example relates to elastin, which is highly expressed in the lung and required for alveologenesis. We compared neonatal mice to 6–26 week old ones and observed that elastin mRNA expression declined by nearly 90% (Fig. 7 G); a similar finding was reported for the elastin protein in the lung of aged mice [ 28 , 54 ]. Owing to its function in alveolar walls, reduced elastin expression results in thinner and fragmented alveolar septa [ 55 ], and independent research showed a similar decline in elastin synthesis with age. As a result, decreased tissue elastance was reported [ 28 , 56 ], which leads to the airspace dilatation regularly seen in the aging lung [ 57 ]. In fact, emphysema is characterized by reduced elasticity of the lung; however, the alveolar walls are destroyed mainly due to the degradation of elastin fibers [ 58 ]. Although lung compliance increases with age, tissue elastance and tissue resistance decrease [ 56 ].
Continuously increased expression of ECM coding genes over time
Shown in Fig. 7 D-F are upregulated ECM coding genes, among them the highly significant 3.5-fold upregulation of leucine‐rich α‐2 glycoprotein ( Lrg1 ). This glycoprotein plays an essential role in angiogenesis by modulating endothelial Tgfβ signaling [ 59 ]. In addition, recent research is suggestive of a link between Lrg1 expression and emphysema, and inhibition of Lrg1 protected endothelial cells from vascular rarefaction (= malperfusion of vessels) and alveolar damage [ 60 ]. Additionally, Lrg1 promotes lung fibrosis following bleomycin treatment by influencing Tgfβ signaling in fibroblasts [ 61 ]. Strikingly, Lrg1 ko mice are protected from lung fibrosis when given the same treatment. There is also evidence for Tnfα to induce expression of Lrg1, and therefore Lrg1 can be linked to tissue inflammation [ 62 ]. Over time, we observed a mild but significant increase in Tnfα expression, and our study is suggestive of Lrg1 as a candidate gene for age-related stiffness of the lung by instructing fibroblast proliferation. Importantly, we observed a 2.4- and 1.7-fold induced expression of the S100 calcium binding protein A4 in 6–26 week old and aged mice, respectively, and it is of considerable importance that M2-polarized alveolar macrophages secreted S100a4 to stimulate lung fibroblast proliferation [ 63 ]. Correspondingly, inhibitors of S100a4 are explored as a therapeutic option for the treatment of lung fibrosis [ 64 ]. Furthermore, we observed a significant upregulation of galectin 1 mRNA ( Lgals1 ) in aged mice, and the coded protein induced fibroblast differentiation and increased ECM stiffness through an activation of the PI3-kinase and p38 MAPK pathways [ 65 ]. Indeed, independent research evidenced induction of galectin-1 by Tgfβ to be associated with fibroblast differentiation into myofibroblasts and nuclear retention of Smad2 [ 65 ]. However, in the present study Smad2 was unchanged and Smad3 was nearly 2-fold repressed.
In addition, galectin 3 transcript expression increased continuously with age (> 3-fold, Fig. 7 D). This protein functions in several biological processes including programmed cell death and innate immune response and promoted macrophage and fibroblast activity and ECM remodeling [ 66 , 67 ]. Indeed, the Framingham Heart study revealed increased blood galectin-3 levels to be associated with restrictive lung disease and interstitial lung abnormalities [ 68 ].
Over time, we calculated a nearly 4-fold increased decorin expression (Fig. 7 D), and this proteoglycan is mainly produced by fibroblasts and functions as an inhibitor of Tgfβ to reduce fibrotic scars [ 69 , 70 ]. In a landmark paper, the antifibrotic function of decorin was clearly established, i.e. transient transgene expression of decorin reduced the fibrotic response to bleomycin in the lung [ 71 ]. Similarly, in severe emphysema, decorin expression is reduced [ 72 ], possibly aggravating the disease [ 73 ].
Continuous repression of ECM coding genes over time
Shown in Fig. 7 G-I are ECM coding genes whose expression was repressed over time, and the majority code for basement membrane components. Specifically, the basal lamina of basement membranes contains laminin, and this heterotrimeric glycoprotein is composed of an α- (e.g., Lama1, Lama2, Lama4), β- (e.g., Lamb1) and γ-chain (e.g., Lamc1). Laminins are interfilament proteins that form polymers to support basement membrane function and are part of a mechanical scaffold [ 74 , 75 ]. We found the laminins ( Lama1, Lama2 , Lama4 , Lamb1 , Lamc1 ), nidogens ( Nid1 , Nid2 ), collagen IV ( Col4a1 , Col4a2 ) and tenascin C ( Tnc ) to be repressed up to 6.5-fold, and all of these genes code for ECM of the basement membrane (Fig. 7 G-I). Other investigators also reported repressed expression of components of the basement membrane. For instance, Godin and colleagues [ 28 ] cultured human bronchial epithelial cells and lung fibroblasts on ECM derived from young and aged lungs. Cells cultured on ECM derived from the lung of aged mice showed reduced expression of laminins, especially laminin α3 and α4. In fact, most laminins in the lung of aged mice were repressed as evidenced by mass spectrometry, and these findings confirm the results of the present study. Together, the gene expression changes contribute to age-related ECM remodeling and provide a molecular rationale for the age-related decline in lung function. A further example refers to the glycoprotein nidogen, which forms a scaffold with laminins and collagen IV in the basal lamina and supports epithelial cell-fibronectin interactions [ 76 ]. Moreover, research confirmed reduced laminin, nidogen and collagen VI protein expression in the aging lung, and this signifies a remodeling of the basal lamina that may influence tissue regeneration [ 5 , 28 ].
Additionally, the glycoprotein tenascin C (Tnc) is part of the ECM of the lamina reticularis, and this matrix influences cell adhesion and proliferation [ 77 , 78 ]. Typically, Tnc is highly expressed during branching morphogenesis and alveogenesis but is barely detectable in neonates, as evidenced by immunoblotting on day 21 post partum [ 79 ]. Similarly, we observed a 4-fold decline in Tnc expression over time and consider its regulation a physiological process.
A further example relates to Mmp2 , Mmp14 and Adamts2 , which were repressed to about 20–30% of controls in the lung of aged mice (Fig. 7 G); these metalloproteinases degrade matrix proteins including collagens (I-V, VII, X-XI), fibronectin, laminin and elastin [ 80 ]. The regulation of matrix metalloproteinases in the lung has been the subject of several reviews [ 81 ], and Mmp2 and Mmp14 are mainly expressed during lung development but decline sharply after birth. In fact, Mmp14 activates Mmp2, and apart from their role in lung morphogenesis such as the development of alveolar septae [ 82 ], research demonstrated Mmp2 deletion to reduce cellular infiltration and fibrosis in an allotransplant model [ 83 ]. Similarly, Mmp14 is highly expressed on lung endothelial cells and degrades collagen I [ 84 ], which supports the migration of cells through dense connective tissue, while Mmp2 degrades elastin and collagen fibers and therefore contributes to ECM remodeling. Moreover, we observed repressed Adamts2 expression, and this disintegrin and metalloproteinase with thrombospondin motifs cleaves the propeptide region of collagens I, II and III, thereby allowing collagen fibril formation and maturation. Its decreased expression in aged mice was also observed by other investigators [ 85 ], and owing to its function, we speculate on a defective ECM turnover activity and a changed ECM composition that will affect airway biology in different ways, i.e. cell adhesion, cell signaling etc.
Unlike the human lung (see below), we observed significantly repressed transcript expression of Col1a1 and Col3a1 in the aged lung of mice, and our findings agree with the data reported by other investigators [ 29 , 85 – 88 ].
ECM remodeling in the aging human lung
As described above, we identified 26 genes commonly regulated between the human test and validation set ( N = 307) (supplementary Table S6). Fourteen code for ECM components, notably several collagens, i.e. COL1A1, COL1A2, COL3A1, COL6A1, COL7A1, COL9A2, COL14A1, COL15A1, COL16A1, COL17A1 and the collagen triple helix repeat containing 1 ( CTHRC1 ), all of which were significantly upregulated in aged individuals. The results for the human test and validation sets are shown in Fig. 8 A-D, and the data indicate increased extracellular matrix deposition with age. Given the significant upregulation of COL1A1 and COL1A2 in lung tissue of aged individuals, we obtained evidence for an increased expression of type I collagen during aging. Note, collagen type I forms a triple helix of two alpha 1 (COL1A1) and one alpha 2 chain (COL1A2) and is abundantly expressed in the lung [ 89 ], where it contributes to the rigidity and elasticity of the pulmonary framework. Among the different cells of the lung, activated fibroblasts or myofibroblasts produce high levels of collagen, especially collagen types I and III; typically these fibroblasts are located in the interstitial space around the alveolar septae and become activated in the process of wound repair. Additionally, mesenchymal cells, smooth muscle cells [ 90 ], pericytes [ 91 ] and macrophages [ 92 ] are able to synthesize collagens; however, they do not play a major role in wound repair. Collagen type III is deposited in the early course of wound repair, followed by collagen type I to prepare a more rigid fiber network [ 93 ]. In the current study we found collagen 3A1 upregulated in aged individuals and therefore confirm the notion of an increased expression of collagens with age. Notwithstanding, the remodeling of the ECM appears to be selective in regards to the collagens involved. Comparable results were reported for mice [ 29 , 86 ].
A further interesting result relates to the induced expression of CTHRC1 (Fig. 8 C, test set and 8D, validation set). This protein limits collagen matrix deposition and promotes cell migration, as shown in studies with balloon-injured arteries of rats [ 94 ]. The protein is secreted in response to injury and blocks excessive collagen matrix deposition. As of today, the role of this protein in normal lung biology is not well understood; however, its upregulation in the lung following exposure to bleomycin has been reported, and single cell transcriptome analysis evidenced induced Cthrc1 expression to be confined to fibroblasts, while other cells of the lung do not express the protein [ 95 ]. Moreover, Cthrc1 ko mice were protected from lung fibrosis following bleomycin treatment [ 96 ]. Furthermore, Caporarello and colleagues reported young and aged mice to differ in the expression of collagens following bleomycin treatment, with aged mice expressing Col1a1 protein more abundantly [ 97 ]; this implies age-related differences in wound repair. Collectively, the upregulation of CTHRC1 in aged individuals indicates a protective mechanism in response to a lifetime of exposure to airborne pollutants and other materials causing inflammation and the associated wound repair. Notwithstanding, Cthrc1 also supports migration and adhesion activities of certain cancer cells [ 98 ].
In aged individuals, we found COL6A1 nearly 2-fold increased, and independent research confirmed its upregulation in a combined transcriptome and proteome study of 9 human lung tissue samples [ 99 ]. Therefore, our data agree with findings reported by others, and this collagen is a structural component of the basement membrane. Strikingly, soluble collagen VI protected fibroblasts from apoptosis in serum-starved cultures by inhibiting the activity of the pro-apoptotic protein Bax [ 100 , 101 ]. It is tempting to speculate that the upregulation of this collagen in the lungs of aged individuals protects fibroblasts from cell death. We also observed an age-dependent increased expression of COL7A1, i.e. an anchoring fibril protein and structural component of the basement membrane (Fig. 8 A). A further example relates to the approximately 2-fold induced expression of COL15A1 and COL16A1 ; these collagens connect the basement membrane of the respiratory epithelium with the surrounding connective tissue.
Recently, studies with human airway smooth muscle cells from COPD (n = 7) and non-COPD (n = 7) susceptible smokers demonstrated TGFβ treatment of cell cultures to significantly stimulate COL15A1 transcript expression, and the induced gene expression is modulated by histone H4 acetylation. Correspondingly, there are epigenetic control mechanisms in the ECM remodeling of the aging lung [ 102 ]. Furthermore, COL16A1 stabilizes collagen fibrils and anchoring microfibrils of the basement membrane, and its upregulation highlights the ECM remodeling related to the basement membrane. A proteomic study revealed Col16a1 to be 19-fold induced in the lung of aged mice [ 5 ], while in the present study we observed a 2-fold induced expression in the aged human lung. Interestingly, in a very recent study, age-associated differences in the human lung extracellular matrix were reported [ 103 ], and the results agree with our findings for COL1A1 , COL6A1 , COL14A1 and FBLN2 .
Additional examples of significantly regulated DEGs include adipocyte enhancer-binding protein 1 ( AEBP1 ), fibulin 2 ( FBLN2 ) and coiled-coil domain containing 80 ( CCDC80 ); however, CCDC80 was only significant in the human test set data. Specifically, AEBP1 is a protein secreted by fibroblasts and myofibroblasts [ 104 ] and functions in wound healing and myofibroblast differentiation. It is of considerable importance that Aebp1-deficient mice are protected from lung injury following bleomycin treatment [ 104 ], and in the present study we observed a nearly 2-fold upregulation of AEBP1 in aged individuals. Given its role in augmenting myofibroblast activation, i.e. smooth muscle actin (SMA) expression and collagen deposition [ 104 ], we view its upregulation as a sign of defective wound repair in aged individuals exposed for a lifetime to airborne pollutants and other harmful agents and materials. AEBP1 may also qualify as a marker of lung stiffness.
Another example relates to fibulin 2. This secreted glycoprotein functions in elastic fibers of the alveolar interstitium, airway and vessel walls, and has been reported to stabilize the basement membrane of the mammary epithelium [ 105 ]. Interestingly, studies in ko mice revealed fibulin 2 not to be required for elastic fiber formation [ 106 ], suggesting functional redundancy among fibulin family members. We observed a significant upregulation of fibulin 2 in aged individuals, and likewise elastin was significantly increased, which implies a coordinate expression of both fiber components with age. Note that these results are opposite to those obtained for mice (see above).
Lastly, we wish to highlight the regulation of coiled-coil domain containing 80. This protein functions in matrix assembly and promotes cell adhesion, while loss of Ccdc80 negatively modulates glucose homeostasis in diet-induced obese mice [ 107 ]. CCDC80 is a component of the basement membrane and regulates matrix assembly and cell adhesion through binding with ECM molecules, as shown in the 293T cell line [ 108 ]. We observed increased pulmonary expression of CCDC80, and a similar increase in plasma CCDC80 protein expression has been reported for aged individuals [ 109 ]. In fact, ongoing studies evaluate CCDC80 as a prognostic biomarker in multimorbid individuals [ 110 ].
Because of its structural domains, CCDC80 gained considerable interest as a thioredoxin-like antioxidant which may function as a tumor suppressor in some organs [ 108 , 111 ]. Gene knockdown of ccdc80 in zebrafish caused decreased expression of Col1a1 , while immunohistochemistry studies in a rat model of pulmonary hypertension demonstrated Ccdc80 protein to be induced, implying distinct roles of this protein in the pathogenesis of pulmonary hypertension and vascular remodeling [ 112 ]. Furthermore, a study with Ccdc80 ko mice revealed this protein to function in lipid metabolism and cytokine-cytokine receptor interactions, which implies a role in inflammation [ 113 ].
Comparison of ECM regulated genes and coded proteins in the aging mouse lung
Apart from considering ECM gene expression changes, we also determined whether ECM-regulated genes are regulated at the protein level in lung tissue of mice. In this regard, the data of Schiller and colleagues were highly informative [ 58 , 114 ]. In fact, most of the ECM coding genes reported in the present study are also regulated at the protein level, as evidenced by mass spectrometry. Apart from their identification, we wished to rank the relative abundance of the ECM coding genes by normalizing the signal intensities to a set of housekeeping genes which were constantly expressed with age. We considered the expression of 798 housekeeping genes (supplementary Table S 7 ) which remained constant over time and inferred changes in the composition of the ECM based on the changes in transcript abundance relative to the housekeeping genes. Specifically, we considered the data of Booth and colleagues [ 115 ], who identified 96 proteins in the matrisome of the normal mouse lung, and based on other published works [ 58 , 80 , 115 ] compiled 256 proteins that code for the extracellular matrix, of which 156 are significantly changed at the transcript level over time (see supplementary Table S 8 ). Among the age-related gene expression changes, we identified 63 as upregulated (Fig. 7 A-F), of which decorin is an interesting example. Its relative abundance increased consistently over time and, as detailed above, decorin functions as an inhibitor of Tgfβ to reduce fibrotic scars [ 69 , 70 ]. Another example relates to the microfibril associated proteins Mfap2 and Mfap4. These glycoproteins colocalize with elastin and decreased continuously during the aging process (Fig. 7 G). The relative changes in transcript abundance for Mfap2 were 1.5%, 1.2% and 0.9% when 1–5 week old animals were compared to 6–26 week old mice and old mice, respectively.
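The normalization to a housekeeping-gene panel described above can be sketched as follows. This is a minimal illustration with made-up signal intensities, not the study's actual measurements or pipeline; the idea is simply to express a transcript's abundance as a percentage of the mean housekeeping signal so that values are comparable across ages.

```python
import numpy as np

def relative_abundance(signal, housekeeping_signals):
    # Express one transcript's signal intensity as a percentage of the
    # mean signal of a constantly expressed housekeeping-gene panel.
    return 100.0 * signal / np.mean(housekeeping_signals)

# Illustrative intensities only (hypothetical values):
housekeeping = np.array([1200.0, 950.0, 1100.0, 1050.0])
mfap2_young = relative_abundance(16.1, housekeeping)   # young animals
mfap2_old = relative_abundance(9.7, housekeeping)      # aged animals
```

With such a normalization, a decline in the relative percentage over time (as reported for Mfap2 and Mfap4) reflects a shift in ECM composition rather than a change in overall signal intensity.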
Our data are in agreement with the original works of Mecham and Gibson [ 116 ], who reported Mfap2 expression to be highest in neonatal mice. It is interesting that Mfap2 knockouts are almost indistinguishable from wild type mice, and the knockouts form microfibrils with elastic fibers [ 116 , 117 ]. Likewise, Mfap4 changed with age from 5.1% to 3.5% and 2.4%, and although Mfap4 is one of the most abundant ECM components, it declined over time. Our findings agree with the results reported by Burgstaller and colleagues [ 58 ]. The fact that Mfap4 declines over time is puzzling. Essentially, studies with Mfap4 knockouts revealed a significant decrease in alveolar surface area by 25%, and mice developed emphysema-like changes at the age of 6 months [ 118 ]. Interestingly, research dating back more than 30 years already demonstrated a decline in alveolar surface area during life [ 119 ]. Furthermore, the elastin content differed significantly between 3 month old wt and Mfap4 ko mice, even though electron microscopy did not evidence altered elastic fiber organization. In the present study, we calculated an elastin content in the lung of about 3.6% for 1–5 week old mice, which is similar to the protein expression data reported by Mecham [ 120 ]. However, over time we observed a drastic decline in elastin gene expression by nearly 10-fold (Fig. 7 G) and therefore investigated the regulation of Tgfβ and Smad proteins, which are known to influence elastin transcription [ 121 ]. Essentially, transcript expression of Eln and Smad3 was positively but loosely correlated (R 2 = 0.4), while Eln and Smad6 were negatively correlated (R 2 = 0.2). Smad3 is a mediator of Tgfβ signaling [ 122 , 123 ], and deficiency of Smad3 represses tropoelastin expression. Conversely, Smad6 inhibits Tgfβ signaling [ 122 , 123 ], and we observed a positive correlation between Smad6 and Tgfβ transcript expression ( R = 0.7).
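The R and R² values quoted above come from pairwise correlation of per-sample transcript levels. A minimal sketch of that computation is shown below, using toy expression vectors that merely stand in for the per-sample Eln and Smad3 abundances; the study's own values were derived from the normalized transcriptome data.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two expression vectors.
    return np.corrcoef(x, y)[0, 1]

# Toy per-sample expression values (hypothetical, for illustration only):
eln = np.array([10.0, 8.5, 7.0, 5.2, 4.1, 2.8])
smad3 = np.array([5.1, 4.9, 4.0, 3.6, 2.5, 2.9])

r = pearson_r(eln, smad3)
r_squared = r ** 2   # the R^2 statistic reported in the text
```

A positive r corresponds to a coordinate change of the two transcripts across samples, while r² quantifies how much of the variance in one is explained by the other.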
Senescence and senescence-associated secretory phenotype (SASP)
Cellular senescence is characterized by a stable cell cycle arrest [ 124 ], and the link between cellular stress responses and senescence has been the subject of several reviews [ 125 – 128 ]. For instance, replicative senescence may result from progressive telomere shortening; at a critical telomere length, the resultant genomic instability causes an activation of DNA damage programs. Additionally, premature senescence can arise from oxidative genotoxic stress and a lack of homeostasis [ 129 ]. Typically, senescent cells display dysfunctional metabolism, and with age the senescence-associated secretory phenotype (SASP) becomes activated [ 130 ]. The SASP comprises a range of inflammatory molecules including interleukins, chemokines, growth factors and proteases, and in their review, Coppe et al. emphasized the multiple stimuli that provoke senescence, i.e. chromatin instability, telomere dysfunction, overexpressed cell cycle inhibitors, non-telomeric DNA damage and other stress signals [ 128 ]. Similarly, the works of Kumar et al. focused on the possible role of SASP in COPD [ 131 ], and based on these reviews we assembled a list of SASP coding genes. We found 53 and 22 genes to be significantly regulated in the lungs of mice and humans, respectively, of which 31 and 12 were upregulated (supplementary Table S 9 ).
In the aging lung of mice, most SASP molecules code for inflammation, and examples of significantly upregulated genes include the interleukins Il1b and Il7, the chemokines Ccl8, Ccl11, Ccl13, Ccl20 and the C-X-C motif chemokine ligands 2 and 13 ( Cxcl2, Cxcl13 ), the matrix metalloproteinases Mmp12 and Mmp13, interleukin 6 cytokine family signal transducer, angiogenin, insulin like growth factor binding protein 6 and cathepsin B. Conversely, Cxcl15 , hepatocyte growth factor, the insulin growth factor binding protein 4, the metalloproteinase Mmp14 and Fn1 were > 2-fold repressed. Collectively, we considered 71 SASP molecules (supplementary Table S 9 ), of which 19 were regulated by ≥ 2-fold. Their regulation signifies inflammation and inhibition of growth. Specifically, Ccl8, Ccl11, Ccl13 and Ccl20 function as chemoattractants and recruit monocytes/histiocytes to sites of injury. These chemokines bind to the chemokine receptors 2, 3, 5 and 6 and display partially overlapping functions. Nonetheless, there are also important differences, with Ccl8 and Ccl13 functioning as chemoattractants for monocytes, eosinophils and T cells, whereas Ccl20 stimulates the homing of lymphocytes and dendritic cells to senescent airway epithelial cells.
A further example relates to the C–C motif chemokine receptor 2 (Ccr2), which is predominantly expressed in macrophages, fibroblasts and endothelial cells. We observed nearly 4-fold increased Ccr2 expression in aged mice when compared to 1–5 week old ones. Additionally, we observed an induced expression of Ccl2; this cytokine binds to the Ccr2 receptor and attracts myeloid and lymphoid cells to sites of inflammation [ 132 ]. Indeed, treatment of radiation-injured mice with a Ccl2 inhibitor, as well as studies in Ccr2 knockout mice, evidenced inhibition of Ccl2 signaling to result in reduced lung inflammation, normalization of endothelial cell morphology and vascular function [ 133 ]. Another study reported the senescence of umbilical cord blood-derived MSCs to be dependent on the activity of Ccl2, and this cytokine is epigenetically regulated by the polycomb protein BMI1 [ 134 ]. Furthermore, Ccr2 ko mice exposed to 85% O 2 for up to 6 days were protected against hyperoxia-induced tissue injury through inhibition of iNOS and subsequent ROS production [ 135 ].
Another example relates to the significantly induced expression of C-X-C motif chemokine ligand 1 ( Cxcl1 ). This cytokine attracts neutrophils to sites of inflammation [ 136 ]. In aged mice, we observed a slight increase in the expression of its receptor, i.e. Cxcr2 , and clinical research evidenced its ligand Cxcl1 to be induced in the progression of interstitial pneumonia with autoimmune features [ 137 ]. Moreover, ectopic expression of Cxcr2 caused premature senescence via a p53-dependent mechanism, whereas Cxcr2 knockdown reinstated cell proliferation [ 138 ].
As part of the cellular senescence program, the insulin-like growth factor binding proteins (IGFBP) are regulated, and in the present study we observed induced expression of Igfbp2, Igfbp3, Igfbp4, Igfbp6 and Igfbp7 in aged mice and human lung tissue [ 139 – 141 ]. Unfortunately, the exact mechanisms by which Igfbps influence senescence are unknown, and genetic loss-of-function studies are inconclusive [ 142 ]. For instance, Igfbp3 , 4 and 5 triple knockouts resulted in a 25% reduction in body growth and decreased fat accumulation, but enhanced glucose homeostasis [ 142 ]. Furthermore, through binding with insulin like growth factor (IGF)-1, IGFBP2 inhibits the binding between IGF1 and the insulin like growth factor 1 receptor (IGF1R), and this reduces cell survival and mitogenesis [ 143 ]. One study reported induction of cellular senescence by Igfbp5 through a p53-dependent mechanism [ 144 ]. On the other hand, overexpression of Igfbp6 delayed replicative senescence of human fibroblasts and might therefore be a negative regulator of senescence [ 145 ].
Although we already addressed the importance of extracellular matrix remodeling in the aging lung (see above), we wish to highlight the marked induction of the matrix metalloproteinases Mmp3 , Mmp10 , Mmp12 and Mmp13 and their inhibitor Timp1 in the lung of aged mice and humans. The up to 5-fold induced expression of these metalloproteinases implies a senescence-related change in ECM composition with obvious implications for its mechanical properties. Moreover, ECM remodeling facilitates extravasation and migration of inflammatory cells [ 81 , 146 ]. Thus, senescence is hallmarked by various inflammatory responses, and in the present study we identified Ccl11 ( Eotaxin ) and Cxcl13 ( Blc ) as highly regulated in the aging lung of mice (supplementary Table S 9 ). Their expression increased by 2.3- and 10-fold, respectively, and both chemokines are synthesized by various pulmonary cells including macrophages [ 147 ]. Ccl11 expression increased in mesenchymal stromal cells of the aged lung [ 148 ] and attracted eosinophils to sites of inflammation [ 149 , 150 ]. This cytokine mediates the migration of NK cells and monocytes to senescent cells [ 148 ].
Unlike Ccl11, the proinflammatory cytokine Cxcl13 supports maturation and B-cell homing to inflammatory foci [ 151 ] and augments IgM and IgA responses following cytokine signaling [ 152 ]. It is of considerable importance that the senescence related gene regulations described above were also observed by other investigators with some genes being > 20-fold induced in the aged lung of mice [ 5 ].
Transcription factor networks and chromatin/telomerase modifiers in SASP
Figure 9 depicts a simplified scheme of key transcription factors and chromatin modifiers deregulated during senescence. Undoubtedly, p53 plays a central role in cellular senescence [ 153 ], and it is well established that p53 functions as a guardian of genomic stability. We observed induced p53 expression in the aging lung of mice. Note that ROS-induced cellular stress stimulates p53 activity and the expression of antioxidant defense genes (Fig. 9 ). However, if cells are significantly damaged, p53 will support programmed cell death. Therefore, p53 takes on a dual role in senescence, and its activity depends on posttranslational modifications, i.e. acetylation [ 154 ], whereas sirtuin 1 (Sirt1) deacetylates p53, thereby inhibiting its activity. Sirt1 is significantly induced in the lung of aged mice, and activation of SIRT1 attenuates inflammaging in chronic inflammatory diseases [ 155 ]. In fact, SIRT1 is down-regulated by autophagy in senescence and ageing [ 156 ].
Among the antioxidant defense genes, we observed significantly repressed expression of superoxide dismutase and of NADPH oxidase 4. The p53 inducible nuclear protein ( Trp53inp1 ) was nearly 2-fold repressed; this protein acts as a positive regulator of autophagy [ 157 , 158 ]. Furthermore, the autophagy-related ubiquitin-like modifier LC3B is significantly repressed (supplementary Table S 10 ), and this suggests inadequate autophagy in aged lungs. The role of defective autophagy in senescence and disease has been the subject of a recent review [ 159 ]. In senescent cells, the expression of the p53 and cyclin dependent kinase inhibitor 1A (p21) proteins is transient [ 160 ], while other cyclin dependent kinase (CDK) inhibitors become activated. However, in the present study the cyclin dependent kinase inhibitor 2A ( p16 ) was basically unchanged (supplementary Table S 10 ), and the cell cycle regulator Cdk1 and p21 were 2-fold repressed in the lung of aged mice, possibly delaying senescence.
Moreover, we observed repressed sestrin 1 and 2 gene expression in the lung of aged mice; the coded proteins bind directly to amino acids and inhibit the target of rapamycin complex 1, with major implications for autophagy, i.e. a lack of its activation. Therefore, we obtained further evidence for impaired autophagy during senescence [ 161 ].
Unlike in mice, most of the antioxidant defense genes were unchanged in the human lung, even though we observed mildly repressed superoxide dismutase 1 ( SOD1 ) and induced P16 in the aged human lung.
We already emphasized the interplay of chronic inflammation and cellular senescence, and next to various histones and p53, the Sirt1 deacetylase also influences the activity of forkhead box O3 (FOXO3), NFκB and other proteins involved in DNA damage and repair responses [ 162 ]. Specifically, FOXO3 functions in the detoxification of oxidative stress, and research demonstrated Foxo3 deletion to result in ROS damage and a ROS-induced reduction of the lifespan of erythrocytes [ 163 ]. Similar results were obtained for the lung, and restoration of FOXO3 activity blocked bleomycin-induced fibrosis and reverted the idiopathic pulmonary fibrosis myofibroblast phenotype [ 164 ]. Therefore, FOXO3 is explored as a therapeutic target, and in the lung of aged mice we found Foxo3 significantly increased.
Another target of Sirt1 is the peroxisome proliferator-activated receptor γ coactivator-1α (Pgc1) [ 165 ], and its expression is significantly induced in the lung of aged mice. Pgc1 functions as a transcriptional coactivator to stimulate mitochondrial biogenesis and lipid metabolism and to improve ATP production [ 166 ], and the SIRT1/PGC-1α/PPAR-γ signaling pathway is essential in senescence. Importantly, Sirt1 deacetylates Pgc1, thereby stimulating its activity, and Sirt1 and Pgc1 counteract senescence [ 167 ]. Therefore, key players in averting senescence are induced in the lung of aged mice [ 168 , 169 ].
A further point of considerable importance is the relationship between telomere length and cellular senescence; owing to its function, the telomere length of chromosomes is considered critical in the control of genomic stability. Its shortening will lead to replicative senescence, although more recent studies challenge this paradigm [ 170 ]. Telomere dysfunction leads to a DNA damage response (DDR) and cellular aging [ 171 ].
One study investigated the telomere length of chromosomes in lung tissue of normal aged individuals and patients undergoing lung transplantation, and evidence for its progressive shortening was obtained. However, the relative telomere length did not correlate with regional disease severity [ 172 ]. Furthermore, expression of the telomerase reverse transcriptase ( Tert ), i.e. an enzyme complex that re-elongates shortened telomeres, was significantly increased in aged mice. Additionally, grainyhead like transcription factor 2 ( Grhl2 ) and paired box 8 ( Pax8 ) are transcription factors which activate the Tert promoter. Given their increased expression in the lung of aged mice, we obtained evidence for a coordinate regulation of Tert with induced expression of its transcription factors.
Another factor that functions in telomere maintenance and protection against end-to-end fusion of chromosomes is the telomeric repeat binding factor (Terf). Both isoforms, i.e. Terf1 and Terf2, were significantly but oppositely regulated in aged mice, and this is suggestive of impaired telomere maintenance in aged animals and augmented senescence.
Unlike in mice, most of the senescence-associated genes are not regulated in the human lung. Nonetheless, we identified three upregulated genes in the test and validation sets, and all are p53 responsive genes. Specifically, the sulfatase SULF2 is an enzyme which removes 6-O-sulfates from heparan sulfate (HS). This sulfatase is a p53 target gene [ 173 ], and its repression causes an impaired senescence response to genotoxic stress [ 174 ]. In fact, SULF2 transcript expression increased mildly (1.5-fold) but significantly in the lung of aged individuals, and studies in mice ascribe SULF2 a protective role in epithelial cells of the lung following bleomycin-induced injury [ 175 ].
The second gene codes for the Odd-Skipped related transcription factor 1 (OSR1, only significant in the human validation set). OSR1 is mildly but statistically significantly upregulated, and its exact function in lung biology has not been established as yet. Nonetheless, it is highly expressed in the human lung [ 176 ] and was reported to inhibit lung cancer proliferation and invasion by reducing Wnt signaling through the suppression of the SOX9 and β-catenin signaling pathways [ 177 ].
The third p53 target gene codes for the NYN domain and retroviral integrase containing protein. There is only very limited information on the role of this retroviral integrase catalytic domain-containing protein other than that it possesses nucleic acid binding ability; we found its expression 1.7-fold increased in the human test set data.
Age-related changes in the expression of immune cell marker genes
We interrogated four different databases to retrieve gene markers of different immune cells in the lung [ 5 , 178 – 180 ] and searched for the consensus among them (supplementary Table S 11 , S 12 ). Subsequently, we applied single sample gene set enrichment analysis (ssGSEA) and evaluated expression changes of the gene markers over time. The data allowed us to infer changes in the immune cell composition/phenotypes in the aging lung of mice. Given that about 36% of the gene markers are mutually expressed between various immune cells, we also performed the same computations after removing the commonly expressed marker genes among the different immune cells and found the results of the two approaches to be comparable.
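The robustness check of removing shared marker genes amounts to keeping only genes assigned to exactly one cell type. A minimal sketch is given below; the gene symbols and set memberships are hypothetical stand-ins (the real sets are listed in supplementary Table S 12).

```python
from collections import Counter

# Hypothetical marker sets, for illustration only:
markers = {
    "T_cell":     {"Cd3e", "Cd3d", "Lck", "Il7r"},
    "B_cell":     {"Cd19", "Ms4a1", "Il7r"},
    "macrophage": {"Cd68", "Mrc1", "Lck"},
}

def unique_markers(marker_sets):
    # Count how many cell types each gene marks, then keep only genes
    # that are specific to exactly one cell type.
    counts = Counter(g for genes in marker_sets.values() for g in genes)
    return {cell: {g for g in genes if counts[g] == 1}
            for cell, genes in marker_sets.items()}

uniq = unique_markers(markers)
```

In this toy example, the shared genes (here Il7r and Lck) are dropped from every set, leaving cell-type-specific markers for the enrichment computation.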
We evaluated age-related changes of marker genes on the basis of the enrichment score for individual immune cells (supplementary Table S 13 ) and independently confirmed the results by interrogating single cell RNAseq data for pulmonary resident cells. We tested statistical significance with the Kolmogorov–Smirnov test, which assesses the distribution of gene markers by comparing their position and distribution in individual lung samples from young and aged animals. Furthermore, we considered the distribution of enrichment scores and, if not normally distributed, adopted a non-parametric Wilcoxon rank-sum test to discover significant changes. We therefore interpret an age-related change in an enrichment score as an alteration in the immune cell responses of the lung.
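The workflow of per-sample enrichment scoring followed by a non-parametric young-versus-aged comparison can be sketched as below. Note that the scoring function here is a simplified stand-in (mean percentile rank of the marker genes within one sample), not the exact ssGSEA algorithm, and all data are synthetic.

```python
import numpy as np
from scipy import stats

def marker_enrichment(expr, marker_idx):
    # Simplified single-sample score: mean percentile rank of the marker
    # genes within one sample's expression profile (a stand-in for ssGSEA).
    ranks = stats.rankdata(expr) / len(expr)
    return ranks[marker_idx].mean()

rng = np.random.default_rng(0)
n_genes = 500
markers = np.arange(20)                  # hypothetical marker gene set

# Toy cohorts: marker genes are shifted upward in the "aged" samples.
young = [marker_enrichment(rng.normal(0.0, 1.0, n_genes), markers)
         for _ in range(10)]
aged = []
for _ in range(10):
    expr = rng.normal(0.0, 1.0, n_genes)
    expr[markers] += 1.0                 # induced marker expression with age
    aged.append(marker_enrichment(expr, markers))

# Non-parametric comparison of the per-sample enrichment scores:
statistic, p_value = stats.ranksums(aged, young)
```

A significant rank-sum test on the per-sample scores is then read as an age-related shift in the activity of the corresponding immune cell type.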
Figure 10 A, B depicts enrichment scores as violin plots for different immune cells. The violin plots shown in panel A refer to age-dependent changes in the enrichment score for gene marker sets of individual cell types. The individual genes are given in supplementary Table S 12 and, in the case of dendritic cells and alveolar macrophages, consisted of 194 and 158 genes, respectively. We independently confirmed their regulation by interrogating single-cell RNAseq data (Fig. 10 A2), and the enrichment scores are significantly increased in animals aged 52–130 weeks. The data imply altered immune cell activity with age, and the change in enrichment scores in the lung of aged animals is suggestive of an age-related induced expression of immune cell gene markers. Independent research found similar age-related changes in the expression of immune cell gene markers, and various investigators coined the phrase “inflammaging”, which implies an age-related sterile and chronic upregulation of pro-inflammatory cytokines [ 181 ]. Depicted in Fig. 10 B1 and B2 are the results for the human test and validation sets of 107 and 307 lung samples, respectively. Here, the data are less obvious, and except for the reduction of the enrichment score for the macrophage gene marker set of the test set (Fig. 10 B1), no significant changes could be ascertained. On the other hand, for the human validation set the marker genes for CD4 + naive, CD4 + effect and CD8 + effect cells increased significantly (Fig. 10 B2).
One reason for the differences between humans and mice is the large variation in the expression of marker genes seen with human lung samples (see the violin plots). Another point relates to the different methods employed, i.e. RNA-seq (Fig. 10 B1, test set) and the microarray platform (Fig. 10 B2, validation set). Therefore, we could not combine the data. Nonetheless, the results for the macrophage gene markers are in line with research reported by others. With age, the number of alveolar macrophages declined [ 182 ]; the various aspects of immunosenescence, especially of macrophages, were the subject of a recent review [ 183 ].
Figure 10 C visualizes commonly expressed gene markers for various immune cells of the mouse lung. In total, we observed 31 marker genes which increased continuously with age among 89 individual samples, and these included some highly regulated ones such as the interleukin 7 receptor alpha chain ( Il7r ), which increased 4-fold. Il7r is a key player in the maturation of B and T cells, and it was demonstrated that its expression is positively correlated with immune cell infiltration in the tumor microenvironment of lung adenocarcinoma [ 184 ]. Moreover, in the lung of aged mice we observed > 3-fold induced expression of Ccl5, and this potent inducer of neutrophil migration stimulates alveolar macrophage activation [ 185 ]. Furthermore, we observed approximately 7- and 4-fold increases in the expression of the neutrophil marker genes Cd177 and Lrg1, and observed upregulation of the major histocompatibility class II molecules, i.e. HLA-DRB1 , HLA-DQA1 and HLA-DQB2 . Note that we converted the mouse MHC class II genes to human ones based on the NCBI database and annotations reported in [ 186 ]. These MHC molecules are of critical importance in presenting antigens to the immune system. Although the significance of an induced expression of MHC molecules is far from clear, Jurewicz & Stern suggested that an upregulation of MHC-II-peptide complexes serves to focus the peptide repertoire and its processing by lymphocytes [ 187 ]. In general, the upregulation of MHC II molecules allows for the effective presentation of antigens. Strikingly, the genes coding for the immunoglobulin heavy constant mu ( Ighm ), immunoglobulin heavy constant alpha ( Igha ) and immunoglobulin heavy constant gamma ( Ighg ) were > 20-fold induced in the lung of aged mice. Typically, these are expressed on preB and B cells and contribute to immunoglobulin receptor binding activity. Their significant upregulation emphasizes inflammaging.
We also observed a 2.3-fold upregulation of Cd68 in the lung of aged mice; the coded protein functions as a scavenger receptor. CD68 is a specific macrophage marker, and apart from its roles in the immune response, the CD68 scavenger receptor phagocytoses cell debris and lipids. Furthermore, studies in Cd68 ko mice are suggestive of CD68 negatively regulating antigen uptake and loading onto MHC molecules [ 188 ].
We observed significant upregulation of four subunits of Cd3 ( Cd3g , Cd3d , Cd3e , Cd3z , corresponding to the Cd3-gamma γ, -delta δ, -epsilon ε and -zeta ζ chains) in the lung of aged mice (Fig. 10 D). Importantly, Cd3 and the εγ, εδ and ζζ subunits interact with the TCRαβ dimer to form the TCR-CD3 complex. Activation of the TCR-CD3 complex requires phosphorylation of the immunoreceptor tyrosine-based activation motifs (ITAMs) [ 189 ], and this reaction is catalyzed by the lymphocyte-specific protein tyrosine kinase (Lck), which we found 2-fold increased in the lung of aged mice. The activation of the Lck kinase is mediated by the protein tyrosine phosphatase receptor type C (Cd45). This phosphatase dephosphorylates the inhibitory C-terminal tail of Lck [ 190 ], and in the present study we observed its induced expression to be strictly age-related. Lck phosphorylates the zeta chain of T cell receptor associated protein kinase (Zap70), and the gene coding for this kinase is significantly induced in the lung of aged mice (supplementary Table S 10 ). Upon its activation, Zap70 phosphorylates the linker for activation of T cells (Lat) and the lymphocyte cytosolic protein 2 (Slp76). Both genes were about 2-fold induced in the lung of aged mice. The coded proteins function as a scaffold for signaling molecules [ 191 ]. By now, it is well established that different antigens elicit distinct phosphorylations of the various ITAMs of the CD3 receptor [ 192 ], and although not fully understood, ITAM tyrosine phosphorylation diversity is required for optimal TCR signal transduction and subsequent T cell maturation [ 193 ].
Moreover, Cd4 and Cd8 act as coreceptors to amplify TCR signaling and their expression increased mildly but significantly (Fig. 10 C). Likewise, the co-stimulatory molecules Cd28 , Cd80 and Cd5 were increased in expression by about 2-fold in aged mice (Fig. 10 D).
Specifically, the inducible T cell co-stimulator (Icos) is expressed on activated CD4 + and CD8 + T cells [ 194 ], but not on resting T cells, and Icos is rapidly induced after TCR and Cd28 activation [ 195 ]. Interestingly, TNFα treatment induced Icosl expression in the human lung A549 cell line [ 196 ], and we observed induced Icos , its ligand ( Icosl ) and Tnfα expression in the lung of aged mice (supplementary Table S 10 ).
Through complex signaling events and the co-stimulatory activity of Cd28 and Icos, activated T cells release several cytokines including Il2, Il4, Il10, Il17 and Tnfα [ 197 ]. While the genes coding for these cytokines were mildly but significantly induced in the lung of aged mice, interferon γ remained unchanged. The mild upregulation of cytokine coding genes implies an age-related increase in inflammatory molecules, but the contributions of specific immune cells to inflammaging remain uncertain. Based on single cell RNAseq data, we show an age-related induced expression of marker gene sets for interstitial macrophages, NK cells, B cells, and CD4 and CD8 T cells, and although the gene markers for neutrophils did not change (Fig. 10 A2), the data derived from lung tissue studies were suggestive of an age-related increase in their expression (Fig. 10 A1). Indeed, an age-related change in neutrophil trafficking and its pulmonary infiltration has been reported [ 198 , 199 ]. Together, inflammaging can be regarded as a misdirected immune response, and cellular senescence is a likely cause of it [ 200 ]. Aging cells are characterized by an accumulation of oxidative stress, metabolic deregulations, DNA damage and telomere shortening, and all of these events trigger “alarm” signals which lead to the recruitment of immune cells [ 198 ]. Furthermore, we noted a mild (< 2-fold) but statistically significant upregulation of NLRP3/ASC/Caspase-1 [ 201 ] and therefore observed upregulation of certain components of the NLRP3 inflammasome machinery.
To independently validate the results, we interrogated single cell RNAseq data and the results are summarized in Fig. 10 E. Here we compared animals aged 12 weeks to animals aged 96 weeks and assessed the expression pattern of 61 immune response genes (Fig. 10 E) in dendritic, alveolar and interstitial macrophages, CD4 + and CD8 + T-cells, B-cells, NK, eosinophilic granulocytes as well as AT2 cells.
We confirmed an age-related induced expression of 44 genes and therefore validated 72% of immune response genes in single cell RNAseq data among various cells resident in lung tissue of mice. The genomic data signify upregulation of gene marker sets linked to dendritic, alveolar and interstitial macrophages, B-, CD4 + and CD8 + T and NK cells.
Age-related changes in the expression of pulmonary cell marker genes
As detailed above, we interrogated different public databases to retrieve gene markers of pulmonary cells and searched for the consensus among them (supplementary Table S 11 , S 12 ). We inferred age-related changes in cell functions by considering differences in the expression of gene marker sets. The number of genes for a given cell type differed and, in the case of alveolar type 1 (AT1) and type 2 (AT2) cells, consisted of 78 and 83 genes, respectively. For each cell type, we provide a list of marker genes in supplementary Table S 12 , and next to AT cells, we evaluated the enrichment score of gene marker sets for basal, ciliated, club, goblet, capillary, endothelial, fibroblast and myofibroblast cells. Furthermore, we validated the expression of gene marker sets in single cell RNAseq data of AT1, AT2, ciliated, club, goblet, endothelial and fibroblast cells (supplementary Figure S 7 ).
Shown in Fig. 11 A are the various epithelial cells of the lung. Specifically, the airway luminal space of the bronchi is lined by pseudostratified columnar ciliated epithelium, which gradually changes from columnar to simple cuboidal; ciliated cells account for more than half of all epithelial cells in the conducting airway [ 202 ]. The luminal surfaces of the bronchus are covered by mucus secreted by goblet cells, which together with ciliated cells function as vital components of mucociliary clearance. Although the physiological functions of the mucus gel of the bronchus and the surfactant layer of the alveoli differ, both are essential barriers in the defense against pathogens and airborne toxins. Furthermore, the surfactant in the alveolar space lowers surface tension, thereby preventing atelectasis, as summarized in the seminal review of Han and Mallampalli [ 203 ].
Based on an assessment of marker gene sets, we observed for 89 mouse lung samples an age-related decrease in the enrichment scores for AT1 and AT2 cells, endothelial cells, fibroblasts and myofibroblasts (Fig. 11 B), whereas for ciliated, club and goblet cells the enrichment scores increased.
A major difference between AT1 and AT2 cells relates to surfactant production: AT2 cells produce surfactant and transdifferentiate into AT1 cells, a process associated with a change in epithelial cell morphology. Specifically, AT1 cells are flat and highly abundant and cover nearly 95% of the gas exchange surface area, whereas AT2 cells are cuboidal and to some extent possess the ability for self-renewal [ 204 ]. By interrogating single-cell RNAseq data of mice, we confirmed the regulation of AT marker genes (supplementary Figure S 7 ). Furthermore, and unlike in the human test set, we observed an age-related reduction in the AT2 and club cell enrichment scores but an increase in the enrichment scores for fibroblasts, myofibroblasts and basal cells when considering the human validation set of 307 individual samples (Fig. 11 C2).
Regarding the regulation of AT2 cell marker genes in the mouse lung, we wish to highlight the 2- and 1.5-fold repressed expression of the ERBB receptor feedback inhibitor 1 ( Errfi1 ) and the suppressor of cytokine signaling 2 ( Socs2 ). Errfi1 is a feedback inhibitor of EGFR signaling, and its repression supports EGFR-dependent cell proliferation [ 205 ]. Conversely, Socs2 functions as an anti-inflammatory mediator, and its repression supports inflammaging [ 206 ]. Furthermore, a recent study demonstrated the importance of cellular senescence of AT2 cells in pulmonary fibrosis [ 207 ]. We observed a significant but mild down-regulation of activated leukocyte cell-adhesion molecule ( Alcam ) in the lung of aged mice; its expression is repressed in experimental models of lung fibrosis and in clinical samples of idiopathic pulmonary fibrosis (IPF), and gene silencing of Alcam is associated with increased cell death [ 208 ].
A further example relates to the transcriptional repressor SIN3 transcription regulator family member A (Sin3a), which was 1.5-fold repressed in the lung of aged mice; in the human lung, its repression did not reach statistical significance. Through genetic studies and single cell RNA sequencing, Yao and colleagues defined a key role for Sin3a in cellular senescence: loss of Sin3a leads to a drastic increase in cell size, a marked reduction in the colony-forming efficiency of lung cells and an almost complete loss of cell proliferation capacity [ 207 ].
Additionally, the enrichment score for marker genes of basal cells declined. These cells are attached to the basement membrane, function as progenitor cells of the respiratory epithelium and are able to replace damaged cells following injury. In the lung of aged mice, we noticed predominantly repression of basal cell marker genes, of which the sestrins are a remarkable example. These stress-inducible proteins are protective against oxidative stress and airway remodeling, and we observed sestrins 1–3 to be repressed less than 2-fold in the lung of aged mice. The functions and roles of sestrins in regulating human diseases have been the subject of a recent review [ 161 ], which included a description of their role in the support of stem cell homeostasis [ 209 ]. Furthermore, we observed significant upregulation of the stem cell marker leucine-rich repeat containing G protein-coupled receptor 5 ( Lgr5 ). Lgr5 expression increases during injury [ 210 ], and its expression is restricted to a subpopulation of basal cells with progenitor/stem cell properties [ 211 ].
Conversely, the marker gene set for ciliated cells increased when 6–26 week old mice were compared to 1–5 week old ones, and based on single cell RNA-seq data of ciliated cells, we confirmed this change in the enrichment score between 12 and 96 week old mice (supplementary Figure S 7 ). Our findings are in agreement with the results of Ilias and colleagues [ 5 ]. However, in the aged human lung, the enrichment score for ciliated cells did not change.
For club cells, we determined an age-related increase in the enrichment score and confirmed this change in single cell RNA-seq data of club cells by comparing 12 week old mice with aged ones (supplementary Figure S 7 ). Club cells are thought to be progenitor/stem cell-like cells which, under conditions of stress, transdifferentiate into goblet and ciliated cells. In fact, there are several stem cell niches in the lung, such as the basal and secretory stem cells of the large airways and the AT2 cells of the alveoli.
Furthermore, the enrichment scores for marker gene sets of goblet cells increased in a strict age-related manner (Fig. 11 B), and among the goblet marker genes, we found mucin 16 and 20, which represent major airway mucins [ 212 ], to be significantly upregulated. In the human lung, however, the enrichment score for gene markers of goblet cells did not differ. Furthermore, for the human validation set (Fig. 11 C2) but not the test set (Fig. 11 C1), we computed a significant decline of the enrichment score for club cells. It is tempting to speculate that such a decline is linked to an age-related decline in the ability to clear mucus from the lung [ 212 ].
Likewise, we observed a strict age-related decline in the enrichment scores for fibroblasts and myofibroblasts (Fig. 11 B). Specifically, the gene marker set of fibroblasts consisted of 434 genes, of which 33 declined in expression with age. Among them are genes coding for basement membrane components (e.g., Lama4 , Lamb1 , Nid1 ) and for ECM degradation ( Mmp2 and Adamts2 , see Fig. 7 G). Strikingly, transcript expression of tenascin C, which codes for a glycoprotein of the ECM (Fig. 7 G) of critical importance in wound healing, was highly repressed (> 3-fold) in the lung of aged mice. A further example relates to slit guidance ligand 2, whose expression declined with age to 15% of the level in young animals; this protein inhibits fibroblast differentiation and fibrosis [ 213 ]. Given its protective role in bleomycin-induced fibrosis, we consider the age-dependent repression of slit guidance ligand 2 as detrimental. Conversely, the age-dependent repression of follistatin-like 1 (1.5-fold) and cadherin 11 (1.5-fold) can be regarded as beneficial, given their role in the promotion of fibrosis [ 214 , 215 ]. Notwithstanding, we could not confirm a decline in the enrichment score for fibroblasts based on single cell RNAseq data of 12 week and 96 week old mice (supplementary Figure S 7 ).
Lung surfactant coding genes in the aging lung
The surfactant of the alveolar space defines the pulmonary air–liquid interface and is composed of phosphatidylcholine (PC, ~ 80%), phosphatidylglycerol (~ 10%), minor phospholipids such as phosphatidylserine (PS), cholesterol (~ 10%) and the surfactant proteins (SP-A to SP-D) [ 216 , 217 ]. Its main function is to reduce surface tension, thereby preventing atelectasis [ 203 ]. For the human and mouse lung, alveolar surfaces of approximately 40–80 m² and 80 cm², respectively, have been determined [ 218 ]. Alveolar type 2 cells produce surfactant, and lung surfactant represents an important barrier against airborne pathogens and toxins. Depicted in Fig. 11 D are the various aspects of surfactant biology, i.e. the transport of lamellar bodies by the ATP binding cassette subfamily A member 3 (ABCA3) transporter to the plasma membrane, the fusion of the lamellar body with the plasma membrane and secretion of its contents into the alveolar fluid, incorporation into the surfactant layer, the recycling of used surfactant, and finally its degradation by lysosomes and alveolar macrophages.
We already emphasized (see above) the role of senescence of the alveolar epithelium in the aging lung and now highlight some key steps in the production of surfactant. First, we considered age-related changes in the regulation of genes coding for the synthesis of phospholipids. Specifically, in the lung of aged mice, we observed about 2-fold repression of the α-isoform of choline kinase (CK) and of choline-phosphate cytidylyltransferase (PCYT1A/CCTα), whereas choline phosphotransferase (CHPT1) and choline/ethanolamine phosphotransferase 1 (CEPT1) increased by 2.4- and 2-fold, respectively. These genes code for the main biosynthetic pathway of phosphatidylcholine, and the results imply an age-related change in the production and composition of surfactant phospholipids. Other investigators also reported compositional changes of the surfactant with age [ 56 , 219 ]. Unlike in mice, the aforementioned genes were not regulated in the human lungs of our study cohort.
Additionally, in the lung of aged mice, we noticed repressed transcript expression of phospholipid phosphatase 1 ( Plpp1 ), whereas Plpp2 , Plpp3 , lipin 1 ( Lpin1 ) and Lpin2 were about 2-fold upregulated. The coded proteins function as phosphatidate phosphohydrolases (PAP) and catalyze the conversion of phosphatidic acid (PA) to diacylglycerol (DAG) [ 220 ]. In the aged human lung, transcript expression of phospholipid phosphatase 2, which likewise converts PA to diacylglycerol, increased by 1.7-fold.
As shown in Fig. 11 D, diacylglycerol kinase (DGK) catalyzes the phosphorylation of diacylglycerol to phosphatidic acid (PA); there are 10 isoforms of DGK, which are grouped into 5 types. Both DGK and PA are important signaling molecules and function in various physiological pathways including the immune response [ 221 ]. In the lung of aged mice, we observed 2-fold repressed expression of the type 2 kinases DGKδ and DGKη. Conversely, in the aged human lung, expression of DGKD and DGKQ increased by about 2-fold, and these kinases play a role in T cell function and the TCR response (see above) [ 222 ].
Phosphatidylserine (PS) is another component of the surfactant, and we observed small but significant increases in transcript expression of phosphatidylserine synthase 1 ( Pss1 ) and Pss2 in the lung of aged mice (Fig. 11 D). The coded proteins catalyze the synthesis of PS from phosphatidylcholine or phosphatidylethanolamine. We also observed an approximately 1.5-fold upregulation of phosphatidylethanolamine N-methyltransferase ( Pemt ), the enzyme that catalyzes the formation of PC via methylation of phosphatidylethanolamine. Although independent research confirmed the expression of this gene in the lung [ 223 ], it is uncertain whether the coded protein is functionally active in airway epithelial cells; notwithstanding, the enzyme is abundantly expressed and highly active in the liver [ 224 ]. Furthermore, the CDP-choline pathway yields PC, and we observed a small but significant down-regulation of Pcyt1a , which codes for the rate-limiting step of this pathway (Fig. 11 D). Of note, ethanolamine kinase (ETNK1) catalyzes the first step in the production of phosphoethanolamine from ethanolamine, en route to phosphatidylethanolamine (PE), which can be methylated by PEMT to support PC production [ 224 ]. Unlike in humans, Etnk1 transcript expression in mice was significantly repressed, by 2.2-fold, implying reduced PE production in the lung of old mice. However, other key enzymes of this pathway, i.e. Pcyt2 and choline/ethanolamine phosphotransferase 1 ( Cept1 ), increased by 2-fold. Interestingly, some reports suggest that phosphatidylethanolamine positively regulates autophagy and longevity [ 225 , 226 ].
The significant down-regulation of the Abca3 gene in the lung of aged mice is another important finding. This lipid transporter of alveolar type 2 cells is of critical importance for the intracellular transport of lamellar bodies to the plasma membrane and the subsequent exocytosis of surfactant (Fig. 11 D). Owing to its structural complexity, the Abca3 transporter is recycled or degraded in lysosomes, and there are multiple reports on disease-causing mutations of the human ABCA3 gene [ 227 , 228 ]. These are linked to various lung diseases, especially respiratory failure in term neonates, childhood interstitial lung disease (chILD), idiopathic pulmonary fibrosis (IPF) and diffuse parenchymal lung disease (DPLD) of adults [ 227 ].
Of note, annexin A7 promotes membrane fusion between lamellar bodies and plasma membranes [ 229 , 230 ], and this supports surfactant exocytosis by AT2 cells. We found 2-fold induced Anxa7 expression in the lung of aged mice, and it has been reported that surfactant phosphatidylcholine secretion can be augmented in Anxa7-deficient AT2 cells through exogenous application of Anxa7 [ 230 ]. Together, the data imply impaired surfactant exocytosis in the lung of aged mice.
ANXA13 is induced in the lungs of aged humans; however, the specific functions of ANXA13 in the lung have not yet been elucidated.
In the lung of aged mice, we observed a significant 2.4-fold induced expression of phospholipase A2 isoform 5 ( Pla2g5 ). This enzyme plays a critical role in surfactant degradation and eicosanoid generation, and its induced expression may aggravate lung injury in aged mice through increased surfactant hydrolysis. However, eicosanoids are also key players in the inflammatory response and in the regulation of vascular tone [ 231 ]. Indeed, Pla2g5 participates in antigen processing in macrophages and dendritic cells, and in Pla2g5 knockout mice, macrophages are defective in phagocytosis [ 232 ] and antigen processing [ 232 , 233 ]. Moreover, Pla2g5 expression in interstitial macrophages is augmented by IL4 [ 232 ], and in the present study, Il4 expression was significantly induced in the lung of aged mice.
Finally, independent studies on the molecular composition of the alveolar lining fluid provided evidence of its enrichment with pro-inflammatory cytokines, modified surfactant proteins and lipids, and complement components in the lungs of aged mice and humans [ 234 ].
The effects of tobacco smoke on age-related gene expression changes in the lung
We performed whole genome analysis of 52 control cases ( N = 45 smokers and N = 7 never smokers) with definite information on tobacco use to address the question of whether smoking affected the age-related gene expression changes discovered in the present investigation. Importantly, none of the age-related gene regulations in the human lung were influenced by tobacco use, and the data are given in supplementary Figure S 8 . We were astonished by this result, as we had expected tobacco product consumption to influence the aging process. However, we identified highly regulated genes in tobacco product users, including the xenobiotic defense gene cytochrome P450 family 1 subfamily A member 1 ( CYP1A1 ) and the aryl hydrocarbon receptor repressor, which were 42- and 5-fold induced, respectively, in smokers. Likewise, we observed 10-fold induced expression of selectin E, which allows adhesion of leucocytes at sites of inflammation [ 235 ]. Moreover, we observed 20-fold induced expression of LIM homeobox domain 9, a putative tumor suppressor which was highly responsive to tobacco smoke exposure, and determined 3-fold induced expression of epidermal growth factor, which stimulates EGFR signaling. Further examples include the 6-fold induced expression of MT1G ; this metallothionein inhibits ferroptosis [ 236 ]. Additionally, we noted 6-fold induced expression of SLC4A1 ; this solute carrier is abundantly expressed in erythrocytes and functions in the transport of carbon dioxide from lung tissue [ 237 ]. Importantly, the concentration of CO2 in mainstream cigarette smoke is about 200 times that of the atmosphere [ 238 ]. Furthermore, tobacco use caused a > 7-fold increased expression of 5'-aminolevulinate synthase 2; this erythrocyte-specific mitochondrial enzyme catalyzes the first step in the heme biosynthetic pathway, yet its expression decreased > 20-fold in the aged human lung.
Given that tobacco smoke contains CO, which harms erythrocytes, and that COHb concentrations may increase from 1% to 10% or above among smokers, we regard the upregulation of 5'-aminolevulinate synthase 2 as an adaptive response to CO-harmed erythrocytes. Together, all of these highly induced gene expression changes are plausible consequences of tobacco product use. We also noted an 8-fold induced expression of the G protein-coupled receptor 15; this GPCR may counteract inflammation following exposure to tobacco smoke [ 239 , 240 ]. Finally, we wish to highlight the opposite regulation of the collagens COL10A1 and COL11A1 and the matrix metalloproteinase MMP11 in control cases with a history of tobacco use. For the aged human lung, we found COL10A1 and MMP11 mildly but significantly upregulated; however, tobacco smoke exposure nearly silenced their expression, to about 10% of the non-smoking controls.
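Fold changes like the 42-fold CYP1A1 induction quoted above are ratios of group-mean expression; a minimal sketch with invented count values chosen to mirror that figure:

```python
import math

def log2_fold_change(treated, control):
    """log2 ratio of mean expression between two groups."""
    mean_t = sum(treated) / len(treated)
    mean_c = sum(control) / len(control)
    return math.log2(mean_t / mean_c)

# Hypothetical normalized CYP1A1 counts (not the study's data),
# smokers vs never smokers
smokers = [420.0, 380.0, 460.0]
never_smokers = [10.0, 9.0, 11.0]
fc = 2 ** log2_fold_change(smokers, never_smokers)
print(round(fc))  # prints 42
```

Real pipelines estimate such fold changes with moderated models (e.g. limma or DESeq2-style shrinkage) rather than raw mean ratios, but the reported numbers have this interpretation.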
Common gene regulations in the aged human and mouse lung
Initially, we performed comparative genomics to search for commonalities between the human test and validation sets. Although the test and validation sets identified 237 and 1673 DEGs, respectively, only 26 genes were regulated in common. Among the common DEGs, 14 code for the ECM and 6 code for the immune response. We already emphasized the importance of ECM remodeling in the aged lung; however, we were surprised to see only a small number of common gene regulations between the two data sets. Our finding underscores the significant variability of genomic data in cohorts of 107 (test set) and 307 (validation set) individuals. Obviously, the genomic data are greatly affected by lifestyle, nutrition, co-morbidities and tobacco product use. As described above, we compared the genomic data of normal lung tissue of never-smokers with that of smokers, and although histopathology confirmed the lung tissues to be normal, we observed 20 highly regulated genes following tobacco smoke exposure. However, tobacco use did not influence the age-dependent gene regulations reported herein.
Subsequently, we analyzed the mouse and human pulmonary genomic data and compared DEGs from the lung of aged mice to those of the aged human lung. Strikingly, only two genes were commonly regulated, i.e. Aebp1 and Col9a2 . However, when the DEGs of the human test and validation sets were combined, we obtained 86 DEGs (59 up and 27 down) commonly regulated between humans and mice. Moreover, 11% and 25% of the up- and down-regulated DEGs, respectively, differed with a fold change > 2. Supplementary Table S 14 compiles the commonly regulated genes, and the majority code for ECM remodeling, senescence, immune response and regulation of the airway epithelium.
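The overlap analyses described above reduce to set intersections over DEG symbol lists. A sketch with made-up gene lists; note that real cross-species comparisons map orthologs via annotation tables, whereas case-folding symbols (human uppercase vs mouse title case) is only a rough stand-in:

```python
# Hypothetical DEG symbol sets standing in for the real lists
human_test = {"AEBP1", "COL9A2", "COL10A1", "MMP11"}
human_validation = {"AEBP1", "COL9A2", "THY1", "COL15A1"}
mouse = {"Aebp1", "Col9a2", "Thy1", "Mmp16"}

# Overlap between the two human cohorts
common_human = human_test & human_validation

# Cross-species overlap after combining the human sets and
# case-normalising gene symbols
human_combined = {g.upper() for g in human_test | human_validation}
common_cross = {g for g in mouse if g.upper() in human_combined}

print(sorted(common_human))  # shared human DEGs
print(sorted(common_cross))  # mouse DEGs with a human counterpart
```

Combining the test and validation sets before intersecting with the mouse list enlarges the cross-species overlap, which is the effect reported above (2 vs 86 common DEGs).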
Examples of commonly upregulated genes between the mouse and human lung include collagens ( COL10A1 , COL15A1 ), MMP ( MMP16 ), glycoproteins (thrombospondin 2) and proteoglycan (aggrecan, proteoglycan 4). For instance, expression of Thy-1 cell surface antigen ( THY1 ) increased with age and THY1 stimulates fibroblast apoptosis and lung injury resolution [ 241 ] while protein tyrosine phosphatase receptor type T (PTPRT) contributes to lung stiffness.
Lastly, we observed opposite regulation of Alas2 , Slc4a1 and orosomucoid 1 between mice and humans: these genes were upregulated in the lung of aged mice but down-regulated in the aged human lung. Conversely, calsyntenin 2, a protein so far only described for its function in the morphology of synaptic complexes in mice, was repressed in the lung of aged mice but upregulated in the human lung and may possibly be related to synaptic changes in sensory nerves of the aged lung.
This study aimed to investigate age-related changes in the pulmonary genomes of mice and humans, and the gene set enrichment analysis of DEGs informed on various biological pathways which changed over time. We focused on four major biological processes, namely extracellular matrix remodeling, cellular senescence, immune response and surfactant biology and considered a wide range of data including single cell RNA sequencing of pulmonary cells. Furthermore, we probed for the concordance between the gene and coded protein data and searched for continuous gene expression changes over time.
ECM remodeling in the aged lung
An important finding of our study is the age-dependent change in the expression of ECM coding genes; for the mouse lung, a complex picture emerged with the regulation of 28 collagens, 10 proteoglycans, 17 matrix metalloproteinases and 62 ECM-related glycoproteins. The majority of continuously upregulated genes coded for metalloproteinases, especially Mmp8 , 9 and 12 (Fig. 7 F). These function in the degradation of macrophage inflammatory protein 1α to reduce lung inflammation (Mmp8) [ 242 ] and of cytokines and matrix-bound growth factors (Mmp9) [ 243 ], and are of critical importance in the onset and progression of lung emphysema (Mmp12) [ 244 ]. A recent study discovered the critical role of the endolysosomal cation channel mucolipin 3 (Trpml3), which is predominantly expressed in alveolar macrophages, in controlling Mmp12 reuptake and clearance [ 245 ]. We observed 2-fold induced expression of Trpml3 and Mmp12 in the aged lung of mice; however, Timp1 , an inhibitor of Mmp12, was only marginally upregulated. Furthermore, in the human lung MMP12 was nearly 2.2-fold repressed while its inhibitor TIMP1 was induced. Given that Mmp12 knockout mice do not develop emphysema [ 246 ], we consider the marked repression of MMP12 as an age-related adaptive response to a lifetime exposure to pathogens and pollutants. Moreover, our study suggests species-dependent clearance of MMP12 via TRPML3 or TIMP1; in fact, TRPML3 is regulated in mice but not in the human lung. Another example relates to the upregulation of Timp3 ; the repressed expression of its targets Mmp2 and Mmp14 (Fig. 7 G) underscores imbalances in the ECM homeostasis of the aging lung.
Additionally, we noticed upregulation of collagen types I and III in the aged human lung; these are produced by activated fibroblasts as part of a wound repair process and result in a more rigid fiber network. Interestingly, Mays et al. reported an age-related increase in the concentration and relative proportions of type I and III collagen in the lung of rats [ 86 ], and therefore similar changes occur in the human and rodent lung. Furthermore, we noticed a V-shaped expression pattern of collagen type II in neonatal and aged lungs. Importantly, in the aged human lung, expression of collagen triple helix repeat containing 1 ( CTHRC1 ) increased significantly, and a recently published atlas of collagen-producing cells of the human lung identified Cthrc1 -expressing fibroblasts as a unique subpopulation in fibrotic lungs [ 95 ]. Based on histopathology, the lung specimens studied by us were classified as healthy. Nonetheless, our findings imply an age-related increase of a subset of fibroblasts which contribute to stiffness and possibly to mild age-related fibrotic changes of the lung. Obviously, continuous exposure to pollutants is a major cause of “inflammaging” and leads to an activation of fibroblasts. The question of how collagen becomes stiff has been the subject of a recent editorial [ 247 ]; HIF pathway activation leads to dysregulated collagen structure–function in the human lung [ 248 ].
We observed repressed expression of genes coding for basement membrane components, notably laminins ( Lama4 , Lamb1 , Lamc1 ), nidogens ( Nid1 , Nid2 ), tenascin C ( Tnc ) and type IV and VI collagens, in the lung of aged mice. The non-collagenous proteins among them function as linkers in the ECM network, while laminins form the basal lamina of basement membranes, which are part of the mechanical scaffold. Our findings agree with the data reported by others [ 5 , 28 ].
Strikingly, transcript expression of elastin (Eln), a fiber protein essential for pulmonary compliance, declined by nearly 90%, and independent research confirmed a similarly reduced elastin protein content with age [ 28 , 54 ]. Owing to its function, a reduced elastin content results in thinner and more fragile alveolar septa [ 55 ], possibly affecting the development of emphysema [ 58 ]. Additionally, there is evidence that Tgfβ influences elastin transcription via phosphatidylinositol 3-kinase/Akt activity [ 121 ]. In the present study, we observed 2-fold repressed expression of the PI3K subunit α in adult and aged mice. Conceivably, this provides a rationale for the sharp decline in elastin expression between neonatal and adult mice (Fig. 7 H).
We also identified an age-related, up to 4.5-fold increased expression of the fibrinogen alpha ( Fga ) and gamma chains ( Fgg ). Notably, enhanced fibrinogen deposition changes the extracellular matrix to support cell migration during matrix remodeling and tissue repair [ 249 ].
Lastly, we highlight the regulation of proteoglycans. These are composed of polysaccharide chains attached to core proteins [ 250 ], and when combined with collagens and elastin, the collagen-elastin network is stabilized [ 251 ]. Proteoglycans participate in inflammatory and angiogenic processes, and we identified 10 proteoglycan genes as significantly regulated in the lungs of aged mice [ 252 – 254 ]. Examples are fibromodulin and versican, whose induced expression negatively influences elastic recoil [ 255 ]. Moreover, versican binds CC-chemokines [ 253 ] and promotes leukocyte migration [ 256 ]. Conversely, an age-related induced expression of decorin protects against fibrotic scars by inhibiting TGFβ cytokine signaling [ 69 , 70 ].
Cellular- and immunosenescence
Cellular senescence is characterized by a stable cell cycle arrest [ 124 ] and an activated senescence-associated secretory phenotype (SASP) [ 130 ]. The SASP comprises a range of inflammatory molecules and causes an age-related decline in the anatomical and physiological functions of the lung [ 257 ]. Moreover, immunosenescence is a key mediator of susceptibility to infection [ 258 ]. Altogether, we considered 71 SASP coding genes, of which 53 were significantly regulated. For instance, Ccl8, Ccl13 and Ccl20 function as chemoattractants and recruit monocytes to sites of injury. Similarly, the regulation of matrix metalloproteinases supports extravasation and migration of inflammatory cells [ 81 , 146 ]. It is of considerable importance that all of the senescence-associated gene regulations were independently confirmed by other studies, with some genes such as Cxcl13 being > 9-fold induced in the lungs of aged mice [ 5 ].
Senescence results in pathophysiological changes of the microenvironment, and there is growing evidence for a central role of p53 in directing cellular senescence programs [ 259 ]. Indeed, a lifetime exposure to airborne particulate matter and pollutants represents a source of chronic stress, and ROS-induced cellular stress stimulates p53 activity and the expression of antioxidant defense genes. Through a complex interplay of factors, p53 either supports repair or triggers programmed cell death, but it also stimulates cellular senescence. We observed an age-related upregulation of antioxidant defense genes, as exemplified by the 2-fold induced expression of nuclear factor erythroid 2 related factor 2, a regulator of redox homeostasis, whereas NADPH oxidase 4 was repressed. Similarly, expression of p53 and of Sirt1, which deacetylates p53, was significantly induced, and all of this suggests age-related responses to oxidative stress. Intriguingly, the p53 inducible nuclear protein ( Trp53inp1 ), which acts as a positive regulator of autophagy [ 157 , 158 ], was significantly repressed. Furthermore, we observed the autophagy-related ubiquitin-like modifier LC3B to be significantly repressed in the lungs of aged mice and therefore infer inadequate autophagy in aged lungs, which has also been the subject of a recent review [ 159 ].
Importantly, telomere length is a critical factor in cellular senescence and genomic stability. Telomere shortening leads to replicative senescence, although this paradigm has been challenged [ 170 ], and telomere dysfunction elicits a DNA damage response (DDR) and cellular aging [ 171 ]. We observed upregulation of telomerase reverse transcriptase (TERT), the rate-limiting subunit of telomerase, in the aged lung of mice; TERT stabilizes telomeric DNA, and we therefore regard its upregulation as an adaptive response to counteract replicative senescence.
Lastly, immunosenescence is defined by an age-related change in the immune response, and based on immune cell gene marker sets, we detected significant changes in genes coding for the innate and adaptive immune systems. In the lung of aged mice, we observed an age-related increase in the expression of gene markers that hallmark alveolar and interstitial macrophages, B cells, dendritic cells, neutrophils and cytotoxic CD4+ and CD8+ T cells (Fig. 10 A1). With the exception of alveolar macrophages, we confirmed these results by single cell RNAseq (Fig. 10 A2). In stark contrast, and with the exception of the gene markers of alveolar macrophages in the test set (Fig. 10 B1), none of the cell marker gene sets changed significantly in the aged human lung. However, with the larger human lung validation set of 307 individuals, gene markers for CD4+ and CD8+ cells were significantly upregulated, and these results are comparable to the aging lung of mice.
Together, we observed species-specific differences in immunosenescence, and Fig. 10 C highlights the regulation of individual genes among different immune cells of the mouse. Similar age-related changes in the expression of immune cell marker genes, based on single cell RNAseq, have been reported [ 5 ]. Overall, we observed 31 immune genes whose expression increased continuously with age; these code for immune cell recruitment, activation, migration, adhesion and proinflammatory cytokine secretion. For instance, we observed about 4-fold induced expression of Il7r and Ccl5 , and the coded proteins function in B and T cell maturation [ 184 ], neutrophil migration and alveolar macrophage activation [ 185 ]. Strikingly, the genes coding for the immunoglobulin heavy constant mu, Igha and Ighg were > 20-fold induced in the lung of aged mice, and we observed upregulation of 20 genes coding for the TCR-CD3 complex. Clearly, this emphasizes the role of these immune gene regulations in inflammaging.
Age-related changes in surfactant coding genes
The surfactant defines the pulmonary air–liquid interface and is composed of phospholipids, cholesterol and surfactant proteins. AT2 cells are of critical importance for the production of surfactant, and its main function is to reduce surface tension [ 203 ]. Moreover, it is an important barrier for airborne pathogens and toxins.
Among the age-related changes, we noted repressed expression of the α-isoform of choline kinase (CK) and of choline-phosphate cytidylyltransferase (PCYT1A/CCTα), whereas choline phosphotransferase (CHPT1) was upregulated (Fig. 11 D). These genes code for the main biosynthetic pathway of phosphatidylcholine, and the data imply changes in the production and composition of surfactant phospholipids. Likewise, independent studies reported compositional changes of the surfactant with age [ 56 , 219 ].
We considered the various aspects of surfactant biology and observed repressed expression of Abca3 but upregulation of annexin VII. These proteins function in the transport of lamellar bodies to the plasma membrane, support their fusion with the plasma membrane and enable surfactant secretion into the alveolar fluid. Next to alveolar macrophages, AT1 cells are capable of recycling surfactant and degrading lipids in lysosomes. We observed induced expression of phospholipase A2 isoform 5, which directly hydrolyzes surfactant phospholipids and stimulates the production of inflammatory lipids. Additionally, the molecular composition of the alveolar lining fluid was the subject of independent reports, and there is evidence for its enrichment with pro-inflammatory cytokines, modified surfactant proteins and lipids and complement components in the lung of aged mice and humans, possibly modifying immune responses [ 234 ].
Note that the luminal surface of the bronchus is covered by mucus, which is synthesized by goblet cells and, together with ciliated cells, functions as a vital component of mucociliary clearance. Although the physiological function of the mucus gel of the bronchus differs from that of the surfactant layer of alveoli, both are essential barriers in the defense against pathogens and airborne toxins. In the present study, we observed an age-related increase in the expression of mucin 16 and 20, which are major airway mucins [ 212 ].
Variations in the expression of cellular marker genes with age
As shown in Fig. 11 B, the enrichment scores for gene marker sets of various lung cells changed with age. Specifically, we observed age-related increases in the enrichment score for several lung cell types. Considering the nature of the enrichment score, its overrepresentation in the lung of aged mice implies increased expression of the corresponding gene marker sets with age. Importantly, each cell type is characterized by a large set of genes; in the case of AT1 and AT2 cells, we considered 78 and 80 unique genes, respectively (supplementary Table S 12 ). We observed a decreased enrichment score for AT1 and AT2 cells, and the findings underscore impaired regenerative capacity with age, even though some key regulators of lung homeostasis were unchanged, particularly the transcriptional co-activators YAP or TAZ. Although expression of the Tead1 transcription factor was significantly repressed (2-fold), the expression of other TEAD coding genes was unchanged. Given the established role of TEAD in the control of AT1-specific gene expression [ 260 ], we performed in silico genomic footprinting and found AT1 and AT2 marker genes to be nearly 3-fold enriched for TEAD binding sites ( p < 0.05). Importantly, independent research confirmed that AT2 cells do not decline in number with age, yet their differentiation into AT1 cells and the overall density might be reduced [ 261 ].
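For intuition about what such a cell-type enrichment score captures, the following is a deliberately simplified sketch (a mean z-score per marker set, not the enrichment method actually used in the study; all gene names and values are hypothetical):

```python
from statistics import mean, pstdev

def marker_enrichment(expr, marker_genes):
    """Toy enrichment score: mean z-score of a marker gene set relative to
    all genes measured in the sample; positive when the set sits above the
    sample-wide average expression."""
    values = list(expr.values())
    mu, sigma = mean(values), pstdev(values)
    return mean((expr[g] - mu) / sigma for g in marker_genes if g in expr)

# Hypothetical expression values for a single sample
sample = {"geneA": 9.5, "geneB": 9.0, "geneC": 2.0, "geneD": 2.5, "geneE": 3.0}
high_set_score = marker_enrichment(sample, {"geneA", "geneB"})
low_set_score = marker_enrichment(sample, {"geneC", "geneD"})
```

A marker set expressed above the sample average yields a positive score, a repressed set a negative one, mirroring how increased or decreased enrichment scores are read in the text.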
On the other hand, the enrichment scores for ciliated airway epithelium and club cells increased, and we confirmed the results by single-cell RNA-seq. Here, we considered 143 and 31 unique genes, respectively, to determine the enrichment score (supplementary Table S 12 ). Importantly, club cells, especially of the secretoglobin family 1A member 1 ( Scgb1a1 ) + lineage, are thought to be progenitor/stem-cell-like cells which, under conditions of stress, transdifferentiate into goblet, ciliated and alveolar epithelial cells as well as basal cells [ 22 ]. The Scgb1a1 protein (formerly named club cell secretory protein 16) is secreted by club cells and confers protection against oxidative stress and inflammation [ 262 , 263 ]. We observed induced SCGB1A1 expression in the aged human lung and consider this upregulation an adaptive response to redox stress and inflammaging. Redox stress is common to tobacco product use; therefore, we and others compared the expression of SCGB1A1 in smokers and non-smokers. There is clear evidence that tobacco use causes its repression [ 264 ]; in a cross-sectional study, decreased SCGB1A1 serum levels were associated with tobacco-smoke-induced chronic obstructive pulmonary disease [ 265 ], while smoking cessation for 3, 6 and 9 months restored SCGB1A1 levels in BAL fluid [ 266 ].
Given that GATA binding protein 6 declined with age, we assumed a Wnt-dependent airway modulation in the lung of aged mice. Specifically, Zhang and colleagues demonstrated the fundamental role of Gata6-regulated Wnt signaling molecules in epithelial stem cell development and lung regeneration [ 267 ], and its repression likely contributed to impaired lung epithelial cell regeneration with age.
An important feature of basal cells is their ability to replace damaged cells. However, in the lung of aged mice, several basal cell marker genes were down-regulated, in agreement with reports by others, as summarized in a recent review [ 200 ]. For instance, we found sestrin 1–3 to be repressed in the lung of aged mice; sestrins are stress-inducible proteins that protect against oxidative stress and airway remodeling and function in stem cell homeostasis [ 209 ]. Notwithstanding, basal cell self-renewal also depends on the ROS-Nrf2-Notch1 axis [ 268 , 269 ], and in the lung of aged mice Nrf2 is upregulated whereas Notch1 is repressed. Our findings of an opposite regulation of cell marker genes imply an age-related deregulation of basal cell homeostasis, and previous investigations demonstrated a decline in basal cell number and function in the lungs of aged mice [ 270 , 271 ].
Study limitations
There are important caveats to our study that we would like to highlight.
First, we report findings with and without animals at the age of one week. Although significant changes in the gene expression pattern are obvious, these animals are sexually immature, and such young mice may not be appropriate in an aging study. On the other hand, the lack of consensus on an aging biology paradigm among leaders of the field, and the marked disagreement on whether or not we know what biological aging means, show that many open questions remain [ 272 ]. In fact, some recent studies argue that aging starts very early in life [ 273 ].
Second, for the human lung, only 26 genes were regulated in common between the test and validation sets, and the variability of individual data prevented the identification of a larger set of significantly regulated genes, even though we considered 414 individual human lung genomic data sets. Consequently, we combined the significantly regulated genes of both human data sets to compare the results to mice. Third, the healthy lung tissue stems from patients undergoing surgery primarily for cancer indications, and although pathology confirmed the tissue to be normal (with the exception of slight to moderate emphysema), the tumor-adjacent biopsies are potentially confounded by the underlying disease. Fourth, the platforms used differed between the human test and validation sets (microarray versus RNAseq); therefore, we compared the DEGs on the basis of fold changes rather than signal intensities. Fifth, changes in the expression of marker genes are likely measures of endogenous activity and therefore may not reflect cellular abundance.
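The fourth point, comparing DEGs across platforms by fold change rather than signal intensity, can be sketched as follows (gene names and cutoffs are hypothetical illustrations, not the study's actual gene lists):

```python
def concordant_degs(fc_test, fc_val, min_abs_log2fc=1.0):
    """Genes regulated in common between two data sets: each must pass the
    fold-change cutoff in both sets and agree in direction; signal
    intensities are never compared, only log2 fold changes."""
    shared = set(fc_test) & set(fc_val)
    return sorted(
        g for g in shared
        if abs(fc_test[g]) >= min_abs_log2fc
        and abs(fc_val[g]) >= min_abs_log2fc
        and fc_test[g] * fc_val[g] > 0
    )

# Hypothetical log2 fold changes (microarray test set vs RNAseq validation set)
test_set = {"CFH": 1.4, "PLVAP": 1.1, "SCGB1A1": 2.0, "TP53": 0.2}
validation_set = {"CFH": 1.2, "PLVAP": -1.3, "SCGB1A1": 1.5, "ACTB": 0.1}
common = concordant_degs(test_set, validation_set)
```

Genes that change direction between platforms (here the hypothetical PLVAP entry) are excluded, so only direction-concordant fold changes count as regulated in common.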
Concluding remarks
Depicted in Fig. 12 is the complex interplay of ECM remodeling in healthy but aged individuals and its links to inflammaging, senescence and surfactant lipids. Obviously, lifetime exposure to airborne pollutants, fine particles and pathogens takes its toll on the mechanical properties of the lung. Yet the cells themselves do not change their mechanical properties [ 274 ]. Rather, an age-related increase in the interstitial ECM and its remodeling can be regarded as a primary cause of lung stiffness. In support of this notion, we observed an increase in the enrichment score of fibroblasts and myofibroblasts in the human lung (Fig. 11 C) but not in the aged mouse lung, and these cells produce the ECM. Our findings of an age-related increase in ECM deposition agree with independent research reports [ 28 – 30 ], and the reviews listed herein are a valuable resource of ongoing and published research [ 32 , 275 – 277 ].
Together, there are 117 and 68 regulated ECM genes in mouse and human, respectively, of which two-thirds have been confirmed by independent research (supplementary Table S 10 ). Likewise, we identified 73 and 31 significantly regulated genes coding for senescence in mice and humans, of which 47 and 17, respectively, have been confirmed by others (supplementary Table S 10 ). Furthermore, we identified 41 immune cell marker genes in mice whose age-dependent regulation has been confirmed by others [ 5 , 26 , 27 ].
Finally, our study revealed a number of as-yet unknown age-related gene regulations. For instance, we observed α‐2 glycoprotein and galectin 3 to be significantly upregulated in mouse lung tissue, and the coded proteins instruct ECM remodeling. A further example relates to the regulation of adipocyte enhancer-binding protein 1, whose role in wound healing, ECM remodeling and myofibroblast differentiation we highlighted. We also observed regulation of collagen triple helix repeat containing 1, a gene marker specific to a fibroblast subpopulation. Noteworthy is also the age-dependent regulation of grainyhead-like transcription factor 2 and paired box 8. These transcription factors activate the telomerase reverse transcriptase promoter and can therefore counteract the shortening of telomeres.
Additionally, we observed significant regulation of 20 genes coding for the TCR-CD3 complex; however, this complex was unchanged in the aged human lung. Notwithstanding, we identified 12 genes coding for the immune response in the aged human lung and confirmed 7 in an independent, recently published RNAseq data set [ 5 ]. For example, we observed an age-related increase in the expression of complement factor H; owing to its function, we view its regulation as an adaptive response to inflammaging. A further example relates to the upregulation of the endothelial-cell‐specific plasmalemma vesicle‐associated protein ( PLVAP ), which allows for basal permeability, leukocyte migration and angiogenesis; this finding is specific to the aged human lung. Other inflammaging-related gene regulation included phosphodiesterase type 2A and thrombospondin 2, whose age-related upregulation has not been reported so far.
Noteworthy is also the down-regulation of Abca3 , which is a major surfactant transporter in AT2 cells. Furthermore, the predominant repression of genes coding for surfactant provides a clue for age-dependent changes in surfactant biology.
We also identified 77 and 13 genes, respectively, in the mouse lung whose expression either continuously increased or decreased with age; these code for various biological processes such as ECM, inflammation and cell adhesion. A further 42 genes displayed a V-shaped expression pattern, i.e. high expression in neonates, down-regulation in young adult mice and upregulation in aged mice, and most code for ECM (Fig. 7 ).

Background
Aging of the lung is a complex process influenced by various stressors, especially airborne pathogens and xenobiotics. Additionally, lifetime exposure to antigens results in structural and functional changes of the lung; yet an understanding of the cell-type-specific responses remains elusive. To gain insight into age-related changes in lung function and inflammaging, we evaluated 89 mouse and 414 individual human lung genomic data sets, focusing on genes mechanistically linked to extracellular matrix (ECM), cellular senescence, immune response and pulmonary surfactant, and we interrogated single-cell RNAseq data to fingerprint cell-type-specific changes.
Results
We identified 117 and 68 mouse and human genes linked to ECM remodeling, which accounted for 46% and 27%, respectively, of all ECM coding genes. Furthermore, we identified 73 and 31 mouse and human genes linked to cellular senescence, and the majority code for the senescence-associated secretory phenotype. These cytokines, chemokines and growth factors are primarily secreted by macrophages and fibroblasts. Single-cell RNAseq data confirmed age-related induced expression of marker genes of macrophages, neutrophils, eosinophils, dendritic cells, NK cells and cytotoxic CD4+ and CD8+ T and B cells in the lung of aged mice. This included the highly significant regulation of 20 genes coding for the CD3-T-cell receptor complex. Conversely, for the human lung we primarily observed macrophage, CD4+ and CD8+ marker genes as changed with age. Additionally, we noted an age-related induced expression of marker genes for mouse basal, ciliated, club and goblet cells, while for the human lung, fibroblast and myofibroblast marker genes increased with age. Therefore, we infer a change in the cellular activity of these cell types with age. Furthermore, we identified predominantly repressed expression of surfactant coding genes, especially the surfactant transporter Abca3, thus highlighting remodeling of surfactant lipids with implications for the production of inflammatory lipids and the immune response.
Conclusion
We report the genomic landscape of the aging lung and provide a rationale for its growing stiffness and age-related inflammation. By comparing the mouse and human pulmonary genome, we identified important differences between the two species and highlight the complex interplay of inflammaging, senescence and the link to ECM remodeling in healthy but aged individuals.
Graphical Abstract
Supplementary Information
The online version contains supplementary material available at 10.1186/s12979-023-00373-5.
Keywords
Abbreviations
ATP binding cassette subfamily A member 3
ADAM metallopeptidase with thrombospondin type 1 motif 1
Adipocyte enhancer-binding protein 1
5'-Aminolevulinate synthase 2
Activated leukocyte cell-adhesion molecule
Annexin
Type 1 pneumocytes
Type 2 pneumocytes
Benjamini-Hochberg
Coiled-coil domain containing 80
C-C motif chemokine ligand
Eotaxin (C–C motif chemokine ligand 11)
Cellular communication network factor 2
C-C motif chemokine receptor 2
CD177 molecule
CD3 delta subunit of T-cell receptor complex
CD3 epsilon subunit of T-cell receptor complex
CD3 gamma subunit of T-cell receptor complex
CD3 zeta subunit of T-cell receptor complex
Protein tyrosine phosphatase receptor type C
Cyclin dependent kinase
Choline/ethanolamine phosphotransferase 1
Childhood interstitial lung disease
Choline kinase alpha
Choline phosphotransferase
Calsyntenin 2
Chronic obstructive pulmonary disease
Colony stimulating factor
Collagen triple helix repeat containing 1
C-X-C motif chemokine ligand
C-X-C motif chemokine ligand 13
Cytochrome P450 family 1 subfamily A member 1
DNA damage response
Differentially expressed genes
Diacylglycerol kinase
Diffuse parenchymal lung disease
Extracellular matrix
Elastin
ERBB receptor feedback inhibitor 1
Enrichment scores
Ethanolamine kinase
Ecotropic viral integration site 2A
Fibulin 2
Fold change
Fibrinogen alpha chain
Fibrinogen beta chain
Fibrinogen gamma chain
Forkhead box O3
GATA binding protein 6
GC robust multi-array average
Gene Expression Omnibus
Gene Ontology
G protein-coupled receptor
Grainyhead like transcription factor 2
Gene set enrichment analysis
Gene Set Knowledgebase
Gene set variation analysis
Nucleosome H4 clustered histone 17
Heparan sulfate
HtrA serine peptidase 1
Inducible T cell co-stimulator
Inducible T cell co-stimulator ligand
Insulin like growth factor
Insulin like growth factor 1 receptor
Insulin-like growth factor binding proteins
Immunoglobulin heavy constant alpha
Immunoglobulin heavy constant gamma
Immunoglobulin heavy constant mu
Interleukin 7 receptor alpha chain
Interstitial lung disease
Idiopathic pulmonary fibrosis
Interquartile range
Immunoreceptor tyrosine-based activation motif
Lys-Asp-Glu-Leu (KDEL) endoplasmic reticulum protein retention receptor 3
KLF transcription factor
Laminin subunit alpha 2
Laminin subunit alpha 4
Laminin subunit beta 1
Laminin subunit gamma 1
Linker for activation of T cells
Lymphocyte-specific protein tyrosine kinase
Galectin 1
Leucine rich repeat containing G protein-coupled receptor 5
Linear models for microarray data
Lipin
Leucine‐rich α‐2 glycoprotein
Macrophage
Microfibril associated proteins
Matrix metalloproteinase
Membrane-spanning 4-domains, subfamily A, member 6B
Metallothionein 1G
Nicotinamide phosphoribosyltransferase
Normalized enrichment score
Nuclear factor interleukin 3 regulated
Nidogen
NLR family pyrin domain containing 3
Nicotinamide nucleotide transhydrogenase
Nuclear factor erythroid 2 related factor 2
Orosomucoid 1
Odd-skipped related transcription factor 1
Cyclin dependent kinase inhibitor 2A
Cyclin dependent kinase inhibitor 1A
Phosphatidic acid
Paired box 8
Phosphatidylcholine
Principal component analysis
Choline-phosphate cytidylyltransferase
Platelet-derived growth factor receptor-like gene
Phosphoethanolamine
Phosphatidylethanolamine N-methyltransferase
Peroxisome proliferator-activated receptor γ coactivator-1α
Phospholipase A2 isoform 5
Phospholipid phosphatases
Plasmalemma vesicle‐associated protein
Phosphatidylserine
Phosphatidylserine synthase
Protein tyrosine phosphatase receptor type T
Robust multi-array average
Reactive oxygen species
S100 calcium binding protein A4
Senescence-associated secretory phenotype
Secretoglobin family 1A member 1
SIN3 transcription regulator family member A
Solute carrier family 4 member 1
Lymphocyte cytosolic protein 2
Smooth muscle actin
Suppressor of cytokine signaling 2
Superoxide dismutase 2
Single-sample gene set enrichment analysis
ST8 alpha-N-acetyl-neuraminide alpha-2,8-sialyltransferase 6
Sulfatase 2
The Cancer Genome Atlas
Telomeric repeat binding factor
Telomerase reverse transcriptase
Thy-1 cell surface antigen
TIMP metallopeptidase inhibitor
Tenascin C
The p53 inducible nuclear protein 1
Endolysosomal cation channel mucolipin 3
Wnt family member 7B
Zeta chain of T cell receptor associated protein
Acknowledgements
Not applicable.
Authorship agreement
All authors have read the journal's authorship agreement.
Authors’ contributions
J.B. designed the study and supervised the research. M.H. performed the research, analyzed the data, prepared the Figures and Tables and contributed to the writing. J.B. wrote the final manuscript and both authors approved the paper.
Funding
Open Access funding enabled and organized by Projekt DEAL. We gratefully acknowledge the financial support of the Chinese Scholarship Council (CSC) to M.H.
Availability of data and materials
The data supporting the findings of this study are available from GEO database (GSE38594, GSE66721, GSE55162, GSE34378, GSE38754, GSE23106, GSE25640, GSE18341, GSE15999, GSE14525, GSE11662, GSE10246, GSE9954, GSE6591, GSE3100, GSE124872, GSE1643, GSE71181) and TCGA database (supplementary Table S 1 and S 2 ).
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.

Immun Ageing. 2023 Nov 6; 20:58 (PMC10626779). License: CC BY.
PMC10632145 (PMID: 37914938)

Methods
Worm maintenance
C. elegans were stored in the dark, and only minimal light was used when transferring worms or mounting worms for experiments. Strains generated in this study (Extended Data Fig. 1a ) have been deposited in the Caenorhabditis Genetics Center (CGC), University of Minnesota, for public distribution. Hermaphrodites were used in this study.
Transgenics
We generated a transgenic worm for interrogating signal propagation, TWISP (AML462), which has been described in more detail previously 23 . This strain expresses the calcium indicator GCaMP6s in the nucleus of each neuron; a purple-light-sensitive optogenetic protein system (GUR-3 and PRDX-2) in each neuron; and multiple fluorophores of various colours from the NeuroPAL 27 system, also in the nucleus of neurons. We also used a QF-hGR drug-inducible gene-expression strategy to turn on the gene expression of optogenetic actuators only later in development. To create this strain, we first generated an intermediate strain, AML456, by injecting a plasmid mix (75 ng μl −1 pAS3-5xQUAS::Δ pes-10P::AI::gur-3G::unc-54 + 75 ng μl −1 pAS3-5xQUAS::Δ pes-10P::AI::prdx-2G::unc-54 + 35 ng μl −1 pAS-3-rab-3P::AI::QF+GR::unc-54 + 100 ng μl −1 unc-122::GFP) into CZ20310 worms, followed by UV integration and six outcrosses 56 , 57 . The intermediate strain, AML456, was then crossed into the pan-neuronal GCaMP6s calcium-imaging strain, with NeuroPAL, AML320 (refs. 23 , 27 , 58 ).
Animals exhibited decreased average locomotion compared to the WT (mean speeds of 0.03 mm s−1 off drug and 0.02 mm s−1 on drug, compared to the mean of 0.15 mm s−1 in WT animals 23 ), as expected for NeuroPAL GCaMP6s strains, which are also reported to be overall less active (around 0.09 mm s−1 during only forward locomotion) 27 .
An unc-31 -mutant background with defects in the dense-core-vesicle-release pathway was used to diminish wireless signalling 53 . We created an unc-31 -knockout version of our functional connectivity strain by performing CRISPR–Cas9-mediated genome editing on AML462 using a single-strand oligodeoxynucleotide (ssODN)-based homology-dependent repair strategy 59 . This approach resulted in strain AML508 (unc-31 (wtf502) IV; otIs669 (NeuroPAL) V 14x; wtfIs145 (30 ng μl −1 pBX + 30 ng μl −1 rab-3::his-24::GCaMP6s::unc-54 ); wtfIs348 (75 ng μl −1 pAS3-5xQUAS::Δ pes-10P::AI::gur-3G::unc-54 + 75 ng μl −1 pAS3-5xQUAS::Δ pes-10P::AI::prdx-2G::unc-54 + 35 ng μl −1 pAS-3-rab-3P::QF+GR::unc-54 + 100 ng μl −1 unc-122::GFP )).
CRISPR–Cas-9 editing was carried out as follows. Protospacer adjacent motif (PAM) sites (denoted in upper case) were selected in the first intron (gagcuucgcaauguugacucCGG) and the last intron (augguacauuggguccguggCGG) of the unc-31 gene ( ZK897.1a.1 ) to delete 12,476 out of 13,169 bp (including the 5′ and 3′ untranslated regions) and 18 out of 20 exons from the genomic locus, while adding 6 bp (GGTACC) for the Kpn-I restriction site (Extended Data Fig. 1b ). Alt-R S.p. Cas9 Nuclease V3, Alt-R-single guide RNA (sgRNA) and Alt-R homology-directed repair (HDR)-ODN were used (IDT). We introduced the Kpn-I restriction site, denoted in upper case (gacccagcgaagcaaggatattgaaaacataagtacccttgttgttgtgtGGTACCccacggacccaatgtaccatattttacgagaaatttataatgttcagg) into our repair oligonucleotide to screen and confirm the deletion by PCR followed by restriction digestion. sgRNA and HDR ssODNs were also synthesized for the dpy-10 gene as a reporter, as described previously 59 . An injection mix was prepared by sequentially adding Alt-R S.p. Cas9 Nuclease V3 (1 μl of 10 μg μl −1 ), 0.25 μl of 1 M KCL, 0.375 μl of 200 mM HEPES (pH 7.4), sgRNAs for unc-31 (1 μl each for both sites) and 0.75 μl for dpy-10 from a stock of 100 μM, ssODNs (1 μl for unc-31 and 0.5 μl for dpy-10 from a stock of 25 μM) and nuclease-free water to a final volume of 10 μl in a PCR tube, kept on ice. The injection mix was then incubated at 37 °C for 15 min before it was injected into the germline of AML462 worms. Progenies from plates showing roller or dumpy phenotypes in the F 1 generation after injection were individually propagated and screened by PCR and Kpn-I digestion to confirm deletion. Single-worm PCR was carried out using GXL-PRIME STAR taq-Polymerase (Takara Bio) and the Kpn-1-HF restriction enzyme (NEB). Worms without a roller or dumpy phenotype and homozygous for deletion were confirmed by Sanger sequencing fragment analysis.
To cross-validate GUR-3/PRDX-2-evoked behaviour responses, we generated the transgenic strain AML546 by injecting a plasmid mix (40 ng μl −1 pAS3-rig-3P::AI::gur-3G::SL2::tagRFP::unc-54 + 40 ng μl −1 pAS3-rig-3P::AI::prdx-2G::SL2::tagBFP::unc-54) into N2 worms to generate a transient transgenic line expressing GUR-3/PRDX-2 in AVA neurons.
Cross-validation of GUR-3/PRDX-2-evoked behaviour
Optogenetic activation of AVA neurons using traditional channelrhodopsins (for example, Chrimson) leads to reversals 45 , 60 . We used worms expressing GUR-3/PRDX-2 in AVA neurons (AML564) to show that GUR-3/PRDX-2 elicits a similar behavioural response. We illuminated freely moving worms with blue light from an LED (peaked at 480 nm, 2.3 mW mm−2) for 45 s. We compared the number of reversal onsets in that period with a control in which only dim white light was present, as well as with the results of the same assay performed on N2 worms. Animals with GUR-3/PRDX-2 in AVA ( n = 11 animals) exhibited more blue-light-evoked reversals per minute than did WT animals ( n = 8 animals) (Extended Data Fig. 2h ).
Dexamethasone treatment
To increase the expression of optogenetic proteins while avoiding arrested development, longer generation time and lethality, a drug-inducible gene-expression strategy was used. Dexamethasone (dex) activates QF-hGR to temporally control the expression of downstream targets 61 , in this case the optogenetic proteins in the functional connectivity imaging strains AML462 and AML508. Dex-NGM plates were prepared by adding 200 μM of dex in dimethyl sulfoxide (DMSO) just before pouring the plate. For dex treatment, L2/L3 worms were transferred to overnight-seeded dex-NGM plates and further grown until worms were ready for imaging. More details of the dex treatment are provided below.
We prepared stock solution of 100 mM dex by dissolving 1 g dexamethasone (D1756, Sigma-Aldrich) in 25.5 ml DMSO (D8418, Sigma-Aldrich). Stocks were then filter-sterilized, aliquoted, wrapped in foil to prevent light and stored at −80 °C until needed. The 200-μM dex-NGM plates were made by adding 2 ml of 100 mM dex stock in 1 l NGM-agar medium, while stirring, 5 min before pouring the plate. Dex plates were stored at 4 °C for up to a month until needed.
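As a sanity check, the stated stock and plate concentrations can be reproduced with a few lines of arithmetic (assuming a molecular weight of dexamethasone of about 392.46 g mol−1, which is not given in the text):

```python
# Sanity check of the dexamethasone concentrations quoted in the protocol.
MW_DEX = 392.46  # g/mol (assumed molecular weight of dexamethasone)

# Stock: 1 g dexamethasone dissolved in 25.5 ml DMSO
stock_mM = (1.0 / MW_DEX) / 0.0255 * 1e3      # close to the stated 100 mM

# Plates: 2 ml of 100 mM stock stirred into 1 l of NGM-agar medium
plate_uM = 100e-3 * (2.0 / 1000.0) * 1e6      # close to the stated 200 uM
```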
Preparation of worms for imaging
Worms were individually mounted on 10% agarose pads prepared with M9 buffer and immobilized using 2 μl of 100-nm polystyrene beads solution and 2 μl of levamisole (500 μM stock). This concentration of levamisole, after dilution in the polystyrene bead solution and the agarose pad water, largely immobilized the worm while still allowing it to slightly move, especially before placing the coverslip. Pharyngeal pumping was observed during imaging.
Overview of the imaging strategy
We combined whole-brain calcium imaging through spinning disk single-photon confocal microscopy 62 , 63 with two-photon 64 targeted optogenetic stimulation 65 , each with its own remote focusing system, to measure and manipulate neural activity in an immobilized animal (Fig. 1a ). We performed calcium imaging with excitation light at a wavelength and intensity that do not elicit photoactivation of GUR-3/PRDX-2 (ref. 66 ) (Extended Data Fig. 2b ). We also used genetically encoded fluorophores from NeuroPAL expressed in each neuron 27 to identify neurons consistently across animals (Fig. 1c ).
Multi-channel imaging and neural identification
Volumetric, multi-channel imaging was performed to capture images of the following fluorophores in the NeuroPAL transgene: mtagBFP2, CyOFP1.5, tagRFP-T and mNeptune2.5 (ref. 27 ). Light downstream of the same spinning disk unit used for calcium imaging travelled on an alternative light path through channel-specific filters mounted on a mechanical filter wheel, while mechanical shutters alternated illumination with the respective lasers, similar to a previously described method 58 . Channels were as follows: mtagBFP2 was imaged using a 405-nm laser and a Semrock FF01-440/40 emission filter; CyOFP1.5 was imaged using a 505-nm laser and a Semrock 609/54 emission filter; tagRFP-T was imaged using a 561-nm laser and a Semrock 609/54-nm emission filter; and mNeptune2.5 was imaged using a 561-nm laser and a Semrock 732/68-nm emission filter.
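The channel configuration described above can be collected into a simple lookup table (a sketch for illustration; laser lines and filter models as quoted in the text):

```python
# NeuroPAL imaging channels: excitation laser line and emission filter
# per fluorophore, as described in the methods text.
CHANNELS = {
    "mtagBFP2":    {"laser_nm": 405, "emission_filter": "Semrock FF01-440/40"},
    "CyOFP1.5":    {"laser_nm": 505, "emission_filter": "Semrock 609/54"},
    "tagRFP-T":    {"laser_nm": 561, "emission_filter": "Semrock 609/54"},
    "mNeptune2.5": {"laser_nm": 561, "emission_filter": "Semrock 732/68"},
}

def channels_for_laser(nm):
    """Fluorophores excited with a given laser line; these are imaged
    sequentially, with mechanical shutters alternating the illumination."""
    return sorted(name for name, c in CHANNELS.items() if c["laser_nm"] == nm)
```

The table makes explicit that the 561-nm line serves two fluorophores, which are separated by their emission filters rather than by excitation.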
After the functional connectivity recording was complete, neuron identities were manually assigned by comparing each neuron’s colour, position and size to a known atlas. Some neurons are particularly hard to identify in NeuroPAL and are therefore absent or less frequently identified in our recordings. Some neurons have dim tagRFP-T expression, which makes it difficult for the neuron segmentation algorithm to find them and, therefore, to extract their calcium activity. These neurons include, for example, AVB, ADF and RID. RID’s distinctive position and its expression of CyOFP nevertheless allowed us to manually target it optogenetically. Neurons in the ventral ganglion are hard to identify because it appears very crowded when viewed in the most common orientation that worms assume when mounted on a microscope slide. Neurons in the ventral ganglion are therefore sometimes difficult to distinguish from one another, especially dimmer neurons such as the SIA, SIB and RMF neurons. In our strain, the neurons AWCon and AWCoff were difficult to tell apart on the basis of colour information.
Volumetric image acquisition
Neural activity was recorded at whole-brain scale and cellular resolution through continuous acquisition of volumetric images in the red and green channels with a spinning disk confocal unit and using LabView software ( https://github.com/leiferlab/pump-probe-acquisition/tree/pp ), similarly to a previous study 67 , with a few upgrades. The imaging focal plane was scanned through the brain of the worm remotely using an electrically tunable lens (Optotune EL-16-40-TC) instead of moving the objective. The use of remote focusing allowed us to decouple the z -position of the imaging focal plane and that of the optogenetics two-photon spot (described below).
Images were acquired by an sCMOS camera, and each acquired image frame was associated to the focal length of the tunable lens ( z -position in the sample) at which it was acquired. To ensure the correct association between frames and z -position, we recorded the analogue signal describing the focal length of the tunable lens at time points synchronous with a trigger pulse output by the camera. By counting the camera triggers from the start of the recording, the z -positions could be associated to the correct frame, bypassing unknown operating-system-mediated latencies between the image stream from the camera and the acquisition of analogue signals.
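The trigger-counting idea can be illustrated with a minimal sketch (a hypothetical data layout; the actual acquisition code is the LabView software linked above). Each analogue z-sample is stored with the cumulative camera-trigger count at which it was read, so frames can be matched to z-positions without relying on operating-system timing:

```python
def z_per_frame(z_samples, n_frames):
    """z_samples: (trigger_count, z_position) pairs, where trigger_count is
    the number of camera triggers counted since the start of the recording
    when that analogue z reading was taken. Returns one z per frame; frames
    without their own reading inherit the most recent earlier one."""
    zs = [None] * n_frames
    for count, z in z_samples:
        frame = count - 1                 # trigger k corresponds to frame k-1
        if 0 <= frame < n_frames and zs[frame] is None:
            zs[frame] = z
    last = z_samples[0][1]
    for i in range(n_frames):
        if zs[i] is None:
            zs[i] = last                  # hold the last known z-position
        last = zs[i]
    return zs

# Hypothetical readings: z sampled at triggers 1, 2 and 4 of a 4-frame burst
zs = z_per_frame([(1, 0.0), (2, 5.0), (4, 15.0)], n_frames=4)
```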
In addition, real-time ‘pseudo’-segmentation of the neurons (described below) required the ability to separate frames into corresponding volumetric images in real time. Because the z -position was acquired at a low sample rate, splitting of volumes on the basis of finite differences between successive z -positions could lead to errors in assignment at the edge of the z -scan. An analogue OP-AMP-based differentiator was used to independently detect the direction of the z -scan in hardware.
Calcium imaging
Calcium imaging was performed in a single-photon regime with a 505-nm excitation laser through spinning disk confocal microscopy, at 2 vol s−1. For functional connectivity experiments, an intensity of 1.4 mW mm−2 at the sample plane was used to image GCaMP6s, well below the threshold needed to excite the GUR-3/PRDX-2 optogenetic system 24 . We note that at this wavelength and intensity, animals exhibited very little spontaneous calcium activity.
For certain analyses (Fig. 6 ), recordings with ample spontaneous activity were desired. In those cases, we increased the 505-nm intensity sevenfold, to approximately 10 mW mm−2, and recorded from AML320 strains that lacked exogenous GUR-3/PRDX-2 to avoid potential widespread neural activation. Under these imaging conditions, we observed population-wide slow stereotyped spontaneous oscillatory calcium dynamics, as previously reported 35 , 68 .
Extraction of calcium activity from the images
Calcium activity was extracted from the raw images by using Python libraries implementing optimized versions of a previously described algorithm 69 , available at https://www.github.com/leiferlab/pumpprobe , https://www.github.com/leiferlab/wormdatamodel , https://www.github.com/leiferlab/wormneuronsegmentation-c and https://www.github.com/leiferlab/wormbrain .
The positions of neurons in each acquired volume were determined by computer vision software implemented in C++. This software was greatly optimized to identify neurons in real time, to also enable closed-loop targeting and stimulus delivery (as described in ‘Stimulus delivery and pulsed laser’). Two design choices made this algorithm considerably faster than previous approaches. First, a local maxima search was used instead of a slower watershed-type segmentation. The nuclei of C. elegans neurons are approximately spheres and so they can be identified and separated by a simple local maxima search. Second, we factorized the three-dimensional (3D) local maxima search into multiple two-dimensional (2D) local maxima searches. In fact, any local maximum in a 3D image is also a local maximum in the 2D image in which it is located. Local maxima were therefore first found in each 2D image separately, and then candidate local maxima were discarded or retained by comparing them to their immediate surroundings in the other planes. This makes the algorithm less computationally intensive and fast enough to also be used in real time. We refer to this type of algorithm as ‘pseudo’-segmentation because it finds the centre of neurons without fully describing the extent and boundaries of each neuron.
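The factorized local maxima search can be illustrated in Python as follows. This is a simplified re-implementation of the idea only (the real code is optimized C++); the toy volume and threshold are ours.

```python
import numpy as np
from scipy.ndimage import maximum_filter

# Illustrative 'pseudo'-segmentation: find 3D local maxima by first
# finding 2D local maxima in each plane, then keeping only candidates
# that also dominate their neighbourhood in the adjacent planes.

def local_maxima_3d(vol, threshold):
    nz = vol.shape[0]
    peaks = []
    for k in range(nz):
        plane = vol[k]
        is_max2d = (plane == maximum_filter(plane, size=3)) & (plane > threshold)
        for i, j in zip(*np.nonzero(is_max2d)):
            lo, hi = max(k - 1, 0), min(k + 2, nz)
            patch = vol[lo:hi, max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            if plane[i, j] >= patch.max():    # dominates adjacent planes too
                peaks.append((k, int(i), int(j)))
    return peaks

vol = np.zeros((3, 5, 5))
vol[1, 2, 2] = 10.0                    # one bright 'nucleus'
peaks = local_maxima_3d(vol, threshold=1.0)
```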
After neural locations were found in each of the volumetric images, a nonrigid point-set registration algorithm was used to track their locations across time, matching neurons identified in a given 3D image to the neurons identified in a 3D image chosen as reference. Even worms that are mechanically immobilized still move slightly and contract their pharynx, thereby deforming their brain and requiring the tracking of neurons. We implemented in C++ a fast and optimized version of the Dirichlet–Student's-t mixture model (DSMM) 70 .
Calcium pre-processing
The GCaMP6s intensity extracted from the images undergoes the following pre-processing steps. (1) Missing values are interpolated on the basis of neighbouring time points. Missing values can occur when a neuron cannot be identified in a given volumetric image. (2) Photobleaching is removed by fitting a double exponential to the baseline signal. (3) Outliers more than 5 standard deviations away from the average are removed from each trace. (4) Traces are smoothed using causal polynomial filtering with a window size of 6.5 s and polynomial order of 1 (Savitzky–Golay filters with windows completely 'in the past'; for example, obtained with scipy.signal.savgol_coeffs(window_length=13, polyorder=1, pos=12)). This type of filter with the chosen parameters is able to remove noise without smearing the traces in time. Note that when fits are performed (for example, to calculate kernels), they are always performed on the original, non-smoothed traces. (5) Where ΔF/F0 of responses is used, F0 is defined as the value of F in a 30-s interval before the stimulation time and ΔF ≡ F − F0. In Fig. 2a, for example, ⟨ΔF/F0⟩ refers to the mean of ΔF/F0 over a 30-s post-stimulus window.
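The causal smoothing of step (4) can be sketched as follows. The savgol_coeffs parameters match those quoted in the text; the 'dot' coefficient ordering and the padding of the past with the first sample are our choices for illustration.

```python
import numpy as np
from scipy.signal import savgol_coeffs

# Sketch of pre-processing step (4): a causal Savitzky-Golay filter whose
# 13-sample window (~6.5 s at 2 vol/s) lies entirely 'in the past'.
# pos=12 evaluates the polynomial fit at the last sample of the window,
# so no future samples are used.
coeffs = savgol_coeffs(window_length=13, polyorder=1, pos=12, use='dot')

def causal_smooth(trace):
    trace = np.asarray(trace, dtype=float)
    padded = np.concatenate([np.full(12, trace[0]), trace])  # pad the past
    windows = np.lib.stride_tricks.sliding_window_view(padded, 13)
    return windows @ coeffs   # one causally smoothed value per time point
```

Because the filter is an order-1 polynomial fit, it reproduces constant and (away from the padded edge) linear traces exactly while attenuating noise.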
Stimulus delivery and pulsed laser
For two-photon optogenetic targeting, we used an optical parametric amplifier (OPA; Light Conversion ORPHEUS) pumped by a femtosecond amplified laser (Light Conversion PHAROS). The output of the OPA was tuned to a wavelength of 850 nm, at a 500 kHz repetition rate. We used temporal focusing to spatially restrict the size of the two-photon excitation spot along the microscope axis. A motorized iris was used to set its lateral size. For temporal focusing, the first-order diffraction from a reflective grating, oriented orthogonally to the microscope axis, was collected (as described previously 71 ) and travelled through the motorized iris, placed on a plane conjugate to the grating. To arbitrarily position the two-photon excitation spot in the sample volume, the beam then travelled through an electrically tunable lens (Optotune EL-16-40-TC, on a plane conjugate to the objective), to set its position along the microscope axis, and finally was reflected by two galvo-mirrors to set its lateral position. The pulsed beam was then combined with the imaging light path by a dichroic mirror immediately before entering the back of the objective.
Most of the stimuli were delivered automatically by computer control. Real-time computer vision software found the position of the neurons for each volumetric image acquired, using only the tagRFP-T channel. To find neural positions, we used the same pseudo-segmentation algorithm described above. The algorithm found neurons in each 2D frame in around 500 μs as the frames arrived from the camera. In this way, locations for all neurons in a volume were found within a few milliseconds of acquiring the last frame of that volume.
Every 30 s, a random neuron was selected among the neurons found in the current volumetric image, on the basis of only its tagRFP-T signal. After galvo-mirrors and the tunable lens set the position of the two-photon spot on that neuron, a 500-ms (300-ms for the unc-31 -mutant strain) train of light pulses was used to optogenetically stimulate that neuron. The duration of stimulus illumination for the unc-31 -mutant strain was selected to elicit calcium transients in stimulated neurons with a distribution of amplitudes such that the maximum amplitude was similar to those in WT-background animals (Extended Data Fig. 2f ). The output of the laser was controlled through the external interface to its built-in pulse picker, and the power of the laser at the sample was 1.2 mW at 500 kHz. Neuron identities were assigned to stimulated neurons after the completion of experiments using NeuroPAL 27 .
To probe the AFD→AIY neural connection, a small set of stimuli used variable pulse durations from 100 ms to 500 ms in steps of 50 ms selected randomly to vary the amount of optogenetic activation of AFD.
In some cases, neurons of interest were too dim to be detected by the real-time software. For those neurons of interest, additional recordings were performed in which the neuron to be stimulated was manually selected on the basis of its colour, size and position. This was the case for certain stimulations of neurons RID and AFD.
Characterization of the size of the two-photon excitation spot
The lateral ( xy ) size of the two-photon excitation spot was measured with a fluorescent microscope slide, and the axial ( z ) size was measured using 0.2-μm fluorescent beads (Suncoast Yellow, Bangs Laboratories), by scanning the z -position of the optogenetic spot while keeping the imaging focal plane fixed (Extended Data Fig. 2a ).
We further tested our targeted stimulation in two ways: selective photobleaching and neuronal activation. First, we targeted individual neurons at various depths in the worm’s brain, and we illuminated them with the pulsed laser to induce selective photobleaching of tagRFP-T. Extended Data Fig. 2c,d shows how our two-photon excitation spot selectively targets individual neurons, because it photobleaches tagRFP-T only in the neuron that we decide to target, and not in nearby neurons. To faithfully characterize the spot size, we set the laser power such that the two-photon interaction probability profile of the excitation spot would not saturate the two-photon absorption probability of tagRFP-T. Second, we showed that our excitation spot is restricted along the z -axis by targeting a neuron and observing its calcium activity. When the excitation was directed at the neuron but shifted by 4 μm along z , the neuron showed no activation. By contrast, the neuron showed activation when the spot was correctly positioned on the neuron (Extended Data Fig. 2e ). To further show that our stimulation is spatially restricted to an individual neuron more broadly throughout our measurements, we show that stimulations do not elicit responses in most of the close neighbours of the targeted neurons (Extended Data Fig. 2i and Supplementary Information ).
Inclusion criteria
Stimulation events were included for further analysis if they evoked a detectable calcium response in the stimulated neuron (autoresponse). A classifier determined whether the response was detected by inspecting whether the amplitude of both the Δ F / F 0 transient and its second derivative exceeded a pair of thresholds. The same threshold values were applied to every animal, strain, neuron and stimulation event, and were originally set to match the human perception of a response above noise. Stimulation events that did not meet both thresholds for a contiguous 4 s were excluded. The RID responses shown in Fig. 4 and Extended Data Fig. 7c are an exception to this policy. RID is visible on the basis of its CyOFP expression, but its tagRFP-T expression is too dim to consistently extract calcium signals. Therefore, in Fig. 4 and Extended Data Fig. 7c (but not in other figures, such as Fig. 2 ), downstream neurons’ responses to RID stimulation were included even in cases in which it was not possible to extract a calcium-activity trace in RID.
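One plausible reading of the autoresponse classifier is sketched below. The text specifies thresholds on both the ΔF/F0 amplitude and its second derivative, sustained for a contiguous 4 s; the specific threshold values, the sampling rate and the exact way the two criteria are combined here are our assumptions, not the paper's code.

```python
import numpy as np

# Hypothetical autoresponse classifier: require a sharp transient (second
# derivative above threshold) plus a contiguous 4-s supra-threshold run
# of dF/F0. All numerical values are illustrative assumptions.

def detect_response(dff, fs=2.0, amp_thr=0.2, d2_thr=0.01, min_s=4.0):
    dff = np.asarray(dff, dtype=float)
    d2 = np.gradient(np.gradient(dff, 1.0 / fs), 1.0 / fs)
    if np.max(np.abs(d2)) < d2_thr:      # no sharp transient at all
        return False
    best = run = 0                       # longest contiguous supra-threshold run
    for above in dff > amp_thr:
        run = run + 1 if above else 0
        best = max(best, run)
    return best >= min_s * fs

t = np.arange(0.0, 30.0, 0.5)
bump = 0.6 * np.exp(-(((t - 10.0) / 3.0) ** 2))   # clear calcium transient
flat = 0.01 * np.ones_like(t)                     # noise-floor trace
```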
Neuron traces were excluded from analysis if a human was unable to assign an identity or if the imaging time points were absent in a contiguous segment longer than 5% of the response window owing to imaging artefacts or tracking errors. A different policy applies to dim neurons of interest that are not automatically detected by the pseudo-segmentation algorithm in the 3D image used as reference for the point-set registration algorithm. In those cases, we manually added the position of those neurons to the reference 3D image. If these 'added' neurons are automatically detected in most of the other 3D images, then a calcium activity trace can be successfully produced by the DSMM nonrigid registration algorithm, and is treated as any other trace. However, if the 'added' neurons are too dim to be detected in the other 3D images as well, and a calcium activity trace cannot be formed for more than 50% of the total time points, the activity trace for those neurons is extracted at the neuron's position as determined from the positions of neighbouring neurons. In the analysis code, we refer to these as 'matchless' traces, because the reference neuron is not matched to any detected neuron in the specific 3D image; instead, its position is simply transformed according to the DSMM nonrigid deformation field. In this way, we are able to recover the calcium activity of some neurons whose tagRFP-T expression is otherwise too dim to be reliably detected by the pseudo-segmentation algorithm. Responses to RID stimulation shown in Fig. 4 and Extended Data Fig. 7c are an exception to this policy. In these cases, the activity of any neuron for which there is not a trace for more than 50% of the time points is substituted with the corresponding 'matchless' trace, and not just for the manually added neurons. This is important for showing responses of neurons such as ADL, which have dim tagRFP-T expression.
In the RID-specific case, to exclude responses that become very large solely because of numerical issues in the division by the baseline activity (owing to the dim tagRFP-T), we also introduce a threshold, excluding responses with ΔF/F > 2.
Kernels were computed only for stimulation-response events for which the automatic classifier detected responses in both the stimulated and the downstream neurons. If the downstream neuron did not show a response, we considered the downstream response to be below the noise level and the kernel to be zero.
Statistical analysis
We used two statistical tests to identify neuron pairs that under our stimulation and imaging conditions can be deemed ‘functionally connected’, ‘functionally non-connected’ or for which we lack the confidence to make either determination. Both tests compare observed calcium transients in each downstream neuron to a null distribution of transients recorded in experiments lacking stimulation.
To determine whether a pair of neurons can be deemed functionally connected, we calculated the probability of observing the measured calcium response in the downstream neuron given no neural stimulation. We used a two-sided Kolmogorov–Smirnov test to compare the distributions of the downstream neuron's ΔF/F0 amplitude and its temporal second derivative from all observations of that neuron pair under stimulation to the empirical null distributions taken from control recordings lacking stimulation. P values were calculated separately for ΔF/F0 and its temporal second derivative, and then combined using Fisher's method to report a single fused P value for each neuron pair. Finally, to account for the large number of hypotheses tested, a false discovery rate was estimated. From the list of P values, each neuron pair was assigned a q value using the Storey–Tibshirani method 40 . q values are interpreted as follows: when considering an ensemble of putative functional connections with q values all less than or equal to q c , an approximately q c fraction of those connections would have appeared in a recording that lacked any stimulation.
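The core of this test can be sketched on synthetic data as follows. The distributions below are invented for illustration, and the real pipeline additionally assigns Storey–Tibshirani q values across all tested pairs.

```python
import numpy as np
from scipy.stats import ks_2samp, combine_pvalues

# Sketch of the connectivity test: KS-compare response amplitudes (and
# second derivatives) against a no-stimulation null, then fuse the two
# P values with Fisher's method. All data here are synthetic.

rng = np.random.default_rng(0)
null_amp = rng.normal(0.0, 0.05, 200)    # control dF/F0 amplitudes
null_d2 = rng.normal(0.0, 0.01, 200)     # control second derivatives
obs_amp = rng.normal(0.4, 0.05, 15)      # a strongly responding pair
obs_d2 = rng.normal(0.05, 0.01, 15)

p_amp = ks_2samp(obs_amp, null_amp).pvalue
p_d2 = ks_2samp(obs_d2, null_d2).pvalue
p_fused = combine_pvalues([p_amp, p_d2], method='fisher').pvalue
```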
To explicitly test whether a pair of neurons is functionally not connected, taking into account the amplitude of the responses, their reliability, the number of observations and multiple hypotheses, we also computed equivalence P eq and q eq values. These assess the confidence that a pair is not connected. We test whether the responses are equivalent to what we would expect from the control distribution using the two one-sided t-test (TOST) 72 . We computed P eq values for ΔF/F0 and its temporal second derivative for a given pair being equivalent to the control distributions within a margin ε, set proportional to the standard deviation σ of the corresponding control distribution. We then combined the two P eq values into a single one with the Fisher method and computed q eq values using the Storey–Tibshirani method 40 . Note that, different from the regular P values described above, the equivalence test relies on the arbitrary choice of ε, which defines when we call two distributions equivalent. We chose a conservative value of ε = 1.2σ.
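A minimal TOST sketch on synthetic data is given below. This is an illustrative implementation of the two one-sided tests with ε = 1.2σ, not the paper's code; the data are invented.

```python
import numpy as np
from scipy.stats import ttest_ind

# Sketch of the TOST equivalence test used to call pairs 'functionally
# non-connected': both one-sided t-tests must reject for the observed
# responses to be deemed equivalent to control within eps = 1.2*sigma.

def tost_pvalue(obs, control, eps_factor=1.2):
    eps = eps_factor * np.std(control, ddof=1)
    p_lower = ttest_ind(obs - eps, control, alternative='less').pvalue
    p_upper = ttest_ind(obs + eps, control, alternative='greater').pvalue
    return max(p_lower, p_upper)      # equivalence established if small

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 200)
obs_same = rng.normal(0.0, 1.0, 50)    # indistinguishable from control
obs_diff = rng.normal(5.0, 1.0, 50)    # clearly different from control
```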
We note that the statistical framework is stringent and a large fraction of measured neuron pairs fail to pass either statistical test.
Measuring path length through the synaptic network
To find the minimum path length between neurons in the anatomical network topology, we proceeded iteratively. We started from the original binary connectome and computed the map of strictly two-hop connections by looking for pairs of neurons that are not connected in the starting connectome (the actual anatomical connectome at the first step) but that are connected through a single intermediate neuron. To generate the strictly three-hop connectome, we repeated this procedure using the binary connectome including direct and two-hop connections, as the starting connectome. This process continued iteratively to generate the strictly n -hop connectome.
In the anatomical connectome (the starting connectome for the first step in the procedure above), a neuron was considered to be directly anatomically connected if the connectomes of any of the four L4 or adult individuals in refs. 1 and 6 contained at least one synaptic contact between them. Note that this is a permissive description of anatomical connections, as it considers even neurons with only a single synaptic contact in only one individual to be connected.
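The iterative strictly-n-hop construction can be sketched with boolean matrices. The 4-neuron chain adjacency below is a toy example, not connectome data.

```python
import numpy as np

# Boolean-matrix sketch of the iterative strictly-n-hop construction:
# maps[k] marks pairs whose shortest path is exactly k+1 hops.

def strictly_n_hop(adj, n_max):
    reach = adj.copy()
    maps = [adj.copy()]
    for _ in range(1, n_max):
        one_more = (reach.astype(int) @ adj.astype(int)) > 0  # extend by one hop
        new = one_more & ~reach          # connected now, but not before
        maps.append(new)
        reach = reach | new
    return maps

adj = np.zeros((4, 4), dtype=bool)       # chain A -> B -> C -> D
adj[0, 1] = adj[1, 2] = adj[2, 3] = True
maps = strictly_n_hop(adj, 3)
```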
Fitting kernels
Kernels k_ij(t) were defined as the functions to be convolved with the activity ΔF_j of the stimulated neuron j to obtain the activity ΔF_i of a responding neuron i, such that ΔF_i(t) = (k_ij ∗ ΔF_j)(t). To fit kernels, each kernel k(t) was parametrized as a sum of convolutions of decaying exponentials, k(t) = Σ_m c_m [θ(t) γ_(m,1) e^(−γ_(m,1) t)] ∗ [θ(t) γ_(m,2) e^(−γ_(m,2) t)] ∗ ⋯ (equation (1)), where the indices i, j are omitted for clarity and θ is the Heaviside function. This parametrization is exact for linear systems, and works as a description of causal signal transmission also in nonlinear systems. Note that increasing the number of terms in the successive convolutions does not lead to overfitting, as would occur by increasing the degree of a polynomial. Overfitting could occur by increasing the number of terms in the sum, which in our fitting is constrained to be a maximum of 2. The presence of two terms in the sum allows the kernels to represent signal transmission with saturation (with c_0 and c_1 of opposite signs) and to assume a fractional-derivative-like shape.
The convolutions are performed symbolically. The construction of kernels as in equation ( 1 ) starts from a symbolically stored, normalized decaying exponential kernel θ(t) γ e^(−γt), with a factor γ that normalizes its integral to 1. Convolutions with normalized exponentials are performed sequentially and symbolically, taking advantage of the fact that successive convolutions of exponentials always produce a sum of functions of the form ∝ θ(t) t^n e^(−γt). Once rules are found to convolve an additional exponential θ(t) γ_i e^(−γ_i t) with a function in that form, any number of successive convolutions can be performed. These rules are as follows. If the initial term is a simple exponential θ(t) c_n e^(−γ_n t) with a given factor c_n (not necessarily just the normalization γ) and γ_i ≠ γ_n, then the convolution is θ(t)[c_μ e^(−γ_μ t) + c_ν e^(−γ_ν t)] with c_μ = −c_n γ_i/(γ_i − γ_n), c_ν = c_n γ_i/(γ_i − γ_n), and γ_μ = γ_i, γ_ν = γ_n. If the initial term is a simple exponential and γ_i = γ_n, then the convolution is θ(t) c_μ t e^(−γ_μ t) with c_μ = c_n γ_i and γ_μ = γ_i. If the initial term is a θ(t) c_μ t^n e^(−γ_μ t) term and γ_i = γ_μ, then the convolution is θ(t) c′ t^(n+1) e^(−γ_μ t) with c′ = c_μ γ_i/(n + 1) and γ_μ = γ_i. If the initial term is a θ(t) c_μ t^n e^(−γ_μ t) term and γ_i ≠ γ_μ, then the convolution is θ(t)[c_ν e^(−γ_ν t) + Σ_(k=0..n) c_k t^k e^(−γ_μ t)], where γ_ν = γ_i, c_ν = c_μ γ_i n!/(γ_μ − γ_i)^(n+1), and c_k = −c_μ γ_i n!/[k! (γ_μ − γ_i)^(n−k+1)].
Additional terms in the sum in equation ( 1 ) can be introduced by keeping track of the index m of the summation for every term and selectively convolving new exponentials only with the corresponding terms.
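The first convolution rule (two exponentials with distinct rates) can be spot-checked numerically; the rates below are arbitrary illustration values.

```python
import numpy as np

# Numerical check: for gamma_i != gamma_n, the convolution of two
# normalized exponentials equals gi*gn/(gi - gn) * (exp(-gn t) - exp(-gi t)).

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
gi, gn = 1.0, 3.0
f = gi * np.exp(-gi * t)                 # theta(t) * gi * exp(-gi t)
g = gn * np.exp(-gn * t)
num = np.convolve(f, g)[: len(t)] * dt   # numerical convolution
ana = gi * gn / (gi - gn) * (np.exp(-gn * t) - np.exp(-gi * t))
err = float(np.max(np.abs(num - ana)))
```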
Kernel-based simulations of activity
Using the kernels fitted from our functional data, we can simulate neural activity without making any further assumptions about the dynamical equations of the network of neurons. To compute the response of a neuron i to the stimulation of a neuron j , we simply convolve the kernel k i , j ( t ) with the activity Δ F j ( t ) induced by the stimulation in neuron j . The activity of the stimulated neuron can be either the experimentally observed activity or an arbitrarily shaped activity introduced for the purposes of simulation.
To compute kernel-derived neural activity correlations (Fig. 6 ), we completed the following steps. (1) We computed the responses of all the neurons i to the stimulation of a neuron j chosen to drive activity in the network. To compute the responses, for each pair i , j , we used the kernel averaged over multiple trials. For kernel-based analysis, pairs with connections of q > 0.05 were considered not connected. We set the activity Δ F j ( t ) in the driving neuron to mimic an empirically observed representative activity transient. (2) We computed the correlation coefficient of the resulting activities. (3) We repeated steps 1 and 2 for a set of driving neurons (all or top-n neurons, as in Fig. 6 ). (4) For each pair k , l , we took the average of the correlations obtained by driving the set of neurons j in step 3.
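Steps (1) and (2) above can be sketched with toy kernels; the single-exponential kernels and the drive transient below are illustrative stand-ins for the fitted kernels and empirical transients.

```python
import numpy as np

# Sketch of a kernel-based simulation: drive one neuron with a transient,
# convolve toy kernels with it to obtain downstream responses, and
# correlate the resulting traces.

dt = 0.5                                    # 2 vol/s
t = np.arange(0.0, 60.0, dt)
drive = np.exp(-(((t - 10.0) / 4.0) ** 2))  # transient in the driven neuron

def respond(kernel, drive):
    return np.convolve(drive, kernel)[: len(drive)] * dt

k_exc = 0.8 * np.exp(-t / 5.0) / 5.0        # excitatory kernel
k_inh = -0.5 * np.exp(-t / 8.0) / 8.0       # inhibitory kernel
r1, r2 = respond(k_exc, drive), respond(k_inh, drive)
corr = float(np.corrcoef(r1, r2)[0, 1])     # kernel-derived correlation
```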
Anatomy-derived simulations of activity
Anatomy-derived simulations were performed as described previously 47 . In brief, this simulation approach uses differential equations to model signal transmission through electrical and chemical synapses and includes a nonlinear equation for synaptic activation variables. We injected current in silico into individual neurons and simulated the responses of all the other neurons. Anatomy-derived responses (Fig. 3 ) of the connection from neuron j to neuron i were computed as the peak of the response of neuron i to the stimulation of j . Anatomy-based predictions of spontaneous correlations in Fig. 6 were calculated analogously to kernel-based predictions.
In one analysis in Fig. 3d , the synapse weights and polarities were allowed to float and were fitted from the functional measurements. In all other cases, synapse weights were taken as the scaled average of three adult connectomes 1 , 6 and an L4 connectome 6 , and polarities were assigned on the basis of a gene-expression analysis of ligand-gated ionotropic synaptic connections that considered glutamate, acetylcholine and GABA neurotransmitter and receptor expression, as performed in a previous study 37 and taken from CeNGEN 38 and other sources. Specifically, we used a previously published dataset (S1 data in ref. 37 ) and aggregated polarities across all members of a cellular subtype (for example, polarities from source AVAL and AVAR were combined). In cases of ambiguous polarities, connections were assumed to be excitatory, as in the previous study 37 . For other biophysical parameters we chose values commonly used in C. elegans modelling efforts 9 , 30 , 47 , 73 .
Characterizing stereotypy of functional connections
To characterize the stereotypy of a neuron pair's functional connection, its kernels were inspected. A kernel was calculated for every stimulus-response event in which both the upstream and the downstream neuron exhibited activity that exceeded a threshold. At least two stimulus-response events that exceeded this threshold were required to calculate a pair's stereotypy. The general strategy for calculating stereotypy was to convolve different kernels with the same stimulus inputs and compare the resulting outputs. The similarity of two outputs is reported as a Pearson's correlation coefficient. Kernels corresponding to different stimulus-response events of the same pair of neurons were compared with one another round-robin style, with one round-robin for each input stimulus. For inputs we chose the set of all stimuli delivered to the upstream neuron. The neuron pair's stereotypy is reported as the average Pearson's correlation coefficient across all round-robin kernel pairings and across all stimuli.
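The round-robin comparison can be sketched as follows; the kernels and stimulus are toy examples rather than fitted quantities.

```python
import numpy as np
from itertools import combinations

# Round-robin stereotypy sketch: convolve the kernels from repeated
# observations of the same pair with the same inputs, and average the
# Pearson correlations of the outputs over all kernel pairings and stimuli.

def stereotypy(kernels, stimuli, dt=0.5):
    cs = []
    for stim in stimuli:
        outs = [np.convolve(stim, k)[: len(stim)] * dt for k in kernels]
        for a, b in combinations(range(len(outs)), 2):
            cs.append(np.corrcoef(outs[a], outs[b])[0, 1])
    return float(np.mean(cs))

t = np.arange(0.0, 30.0, 0.5)
k = np.exp(-t / 5.0)
stim = np.exp(-(((t - 5.0) / 2.0) ** 2))
s_same = stereotypy([k, k], [stim])    # identical kernels: maximally stereotyped
s_opp = stereotypy([k, -k], [stim])    # sign-flipped kernel: anticorrelated
```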
Rise time of kernels
The rise time of kernels, shown in Fig. 5c and Extended Data Fig. 6d , was defined as the interval between the earliest time at which the value of the kernel was 1/ e its peak value and the time of its peak (whether positive or negative). The rise time was zero if the peak of the kernel was at time t = 0. However, saturation of the signal transmission can make kernels appear slower than the connection actually is. For example, the simplest instantaneous connection would be represented by a single decaying exponential in equation ( 1 ), which would have its peak at time t = 0. However, if that connection is saturating, a second, opposite-sign term in the sum is needed to fit the kernel. This second term would make the kernel have a later peak, thereby masking the instantaneous nature of the connection. To account for this effect of saturation, we removed terms representing saturation from the kernels and found the rise time of these ‘non-saturating’ kernels.
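The rise-time definition (before the saturation correction) can be written compactly in code; the toy kernels and dt are illustrative.

```python
import numpy as np

# Rise time: the interval between the earliest time at which |kernel|
# reaches 1/e of its peak magnitude and the time of the peak itself.

def rise_time(kernel, dt):
    kernel = np.abs(np.asarray(kernel, dtype=float))
    peak = int(np.argmax(kernel))
    first = int(np.argmax(kernel >= kernel[peak] / np.e))  # earliest crossing
    return (peak - first) * dt

dt = 0.01
t = np.arange(0.0, 10.0, dt)
instant = rise_time(np.exp(-t), dt)      # peak at t=0, so rise time 0
slow = rise_time(t * np.exp(-t), dt)     # peak at t=1, so nonzero rise
```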
Screen for purely extrasynaptic-dependent connections
To find candidate purely extrasynaptic-dependent connections, we considered the pairs of neurons that are connected in WT animals (q_WT < 0.05) and non-connected in unc-31 animals (q_eq,unc−31 < 0.05, with the additional condition q_unc−31 > 0.05 to exclude very small responses that are nonetheless significantly different from the control distribution). We list these connections and provide additional examples in Extended Data Fig. 9 .
Using a recent neuropeptide–GPCR interaction screen in C. elegans 52 and gene-expression data from CeNGEN 38 , we find putative combinations of neuropeptides and GPCRs that can mediate those connections (Supplementary Table 1 ). We produced such a list of neuropeptide and GPCR combinations using the Python package Worm Neuro Atlas ( https://github.com/francescorandi/wormneuroatlas ). In the list, we only include transcripts from CeNGEN detected with the highest confidence (threshold 4), as described previously 51 . For each neuron pair, we first searched the CeNGEN database for neuropeptides expressed in the upstream neuron, then identified potential GPCR targets for each neuropeptide using information from previous reports 52 , 74 , and finally went back to the CeNGEN database to find whether the downstream neuron in the pair was among the neurons expressing the specific GPCRs. The existence of potential combinations of neuropeptide and GPCR putatively mediating signalling supports our observation that communication in the candidate neuron pairs that we identify can indeed be mediated extrasynaptically through neuropeptidergic machinery.
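The three-step lookup chain (peptides expressed upstream, their receptor targets, receptor expression downstream) can be illustrated with a toy example. All neuron, peptide and receptor names below are invented; the real pipeline queries CeNGEN and the published neuropeptide-GPCR interaction screen via the Worm Neuro Atlas package.

```python
# Toy illustration of the neuropeptide-GPCR lookup chain. All names and
# expression sets are made up for illustration only.

expression = {                        # neuron -> genes detected (threshold 4)
    'NEU_A': {'np-1'},
    'NEU_B': {'gpcr-1'},
}
np_to_gpcr = {'np-1': {'gpcr-1'}}     # neuropeptide -> activated GPCRs

def candidate_pathways(upstream, downstream):
    hits = []
    for pep in expression[upstream]:
        for rec in sorted(np_to_gpcr.get(pep, ())):
            if rec in expression[downstream]:
                hits.append((pep, rec))
    return hits
```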
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Discussion
Signal propagation in C. elegans measured by neural activation differs from model predictions based on anatomy, in part because anatomy does not account for wireless connections such as the extrasynaptic release of neuropeptides 49 .
By directly evoking calcium activity on a timescale of seconds, extrasynaptic signalling serves a functional role similar to that of classical neurotransmitters and contributes to neural dynamics. This role is in addition to its better-characterized role in modulating neural excitability over longer timescales.
Peptidergic extrasynaptic signalling relies on diffusion and therefore may be uniquely well suited to C. elegans ’ small size. Mammals also express neuropeptides and receptors, including in the cortex 50 , but their larger brains might limit the speed, strength or spatial extent of peptidergic extrasynaptic signalling.
Plasticity, neuromodulation, neural-network state, experience dependence and other longer-timescale effects might contribute to variability in our measured responses or to discrepancies between anatomical and functional descriptions of the C. elegans network. A future direction will be to search for latent connections that might become functional only during certain internal states.
Our signal propagation map provides a lower bound on the number of functional connections ( Supplementary Information ). Our measurements required a trade-off between the animal’s health and its transgenic load. To express the necessary transgenes, we generated a strain that is not behaviourally wild type; its signal propagation might therefore also differ from the wild type. To probe nonlinearities and multi-neuron interactions in the network, future measurements are needed of the network’s response to simultaneous stimulation of multiple neurons.
Our signal propagation map reports effective connections, not direct connections. Effective connections are useful for the circuit-level questions that motivate our work, such as how a stimulus in one part of the network drives activity in another. Direct connections are suited for questions of gene expression, development and anatomy, but less so for network function. For example, a direct connection between two neurons could be slow or weak, and considering it alone would overlook a fast and strong effective connection via other paths through the network.
We used a connectome-constrained biophysical model to provide additional evidence to support our claim that measured signal propagation differs from expectations based on anatomy. The model relies on assumptions of timescales, nonlinearities and other parameters that, if incorrect, would contribute to the observed disagreement between anatomy and function. But even without any biophysical model, discrepancies between anatomy and function are apparent; for example, in pairs of neurons with synaptic connections that are functionally non-connected (Fig. 2g ), and in strong functional connections between RID and other neurons that have only weak, variable and indirect synaptic connections (Fig. 4 ). The challenge of confidently constraining model parameters from anatomy highlights the need for functional measurements, like the ones performed here. These functional measurements fill in fundamental gaps in the translation from anatomical connectome to neural activity. An alternative approach for comparing structure and function would be to infer properties of direct connections from the measured effective connections 55 , but this might require a higher signal-to-noise ratio than our current measurements.
The signal propagation atlas presented here informs structure–function investigations at both the circuit and the network level, and enables more accurate brain-wide simulations of neural dynamics. The finding that extrasynaptic peptidergic signalling, which is invisible to anatomy, evokes neural dynamics in C. elegans will inform ongoing discussions about how to characterize other brains in more detail and on a larger scale. | Establishing how neural function emerges from network properties is a fundamental problem in neuroscience 1 . Here, to better understand the relationship between the structure and the function of a nervous system, we systematically measure signal propagation in 23,433 pairs of neurons across the head of the nematode Caenorhabditis elegans by direct optogenetic activation and simultaneous whole-brain calcium imaging. We measure the sign (excitatory or inhibitory), strength, temporal properties and causal direction of signal propagation between these neurons to create a functional atlas. We find that signal propagation differs from model predictions that are based on anatomy. Using mutants, we show that extrasynaptic signalling not visible from anatomy contributes to this difference. We identify many instances of dense-core-vesicle-dependent signalling, including on timescales of less than a second, that evoke acute calcium transients—often where no direct wired connection exists but where relevant neuropeptides and receptors are expressed. We propose that, in such cases, extrasynaptically released neuropeptides serve a similar function to that of classical neurotransmitters. Finally, our measured signal propagation atlas better predicts the neural dynamics of spontaneous activity than do models based on anatomy. We conclude that both synaptic and extrasynaptic signalling drive neural dynamics on short timescales, and that measurements of evoked signal propagation are crucial for interpreting neural function.
Measurements of signal propagation in more than 23,000 pairs of neurons from nematode worms show that predictions of neural function made on the basis of anatomy are often incorrect, in part owing to the effects of extrasynaptic signalling.
Brain connectivity mapping is motivated by the claim that “nothing defines the function of a neuron more faithfully than the nature of its inputs and outputs” 2 . This approach to revealing neural function drives large-scale efforts to generate connectomes—anatomical maps of the synaptic contacts of the brain—in a diverse set of organisms, ranging from mice 3 to Platynereis 4 . The C. elegans connectome 1 , 5 , 6 is the most mature of these efforts, and has been used to reveal circuit-level mechanisms of sensorimotor processing 7 , 8 , to constrain models of neural dynamics 9 and to make predictions of neural function 10 .
Anatomy, however, omits key aspects of neurons’ inputs and outputs, or leaves them ambiguous: the strength and sign (excitatory or inhibitory) of a neural connection are not always evident from wiring or gene expression. Many mammalian neurons release both excitatory and inhibitory neurotransmitters, and functional measurements are thus required to disambiguate their connections 11 . For example, starburst amacrine cells release both GABA (γ-aminobutyric acid) and acetylcholine 12 ; neurons in the dorsal raphe nucleus release both serotonin and glutamate 13 ; and neurons in the ventral tegmental area release two or more of dopamine, GABA and glutamate 14 . The timescale of neural signalling is also ambiguous from anatomy. In addition, anatomy disregards changes to neural connections from plasticity or neuromodulation; for example, in the head compass circuit in Drosophila 15 or in the crab stomatogastric ganglion 16 , respectively. Both mechanisms serve to strengthen or to select subsets of neural connections out of a menu of possible latent circuits. Finally, anatomy ignores neural signalling that occurs outside the synapse, as explored here. These ambiguities or omissions all pose challenges for revealing neural function from anatomy.
A more direct way to probe neural function is to measure signal propagation by perturbing neural activity and measuring the responses of other neurons. Measuring signal propagation captures the strength and sign of neural connections reflecting plasticity, neuromodulation and even extrasynaptic signalling. Moreover, direct measures of signal propagation allow us to define mathematical relations that describe how the activity of an upstream neuron drives activity in a downstream neuron, including its temporal response profile. Historically, this and related perturbative approaches have been called many names ( Supplementary Information) , but they all stand in contrast to correlative approaches that seek to infer neural function from activity correlations alone. Correlative approaches do not directly measure causality and are limited to finding relations among only those neurons that happen to be active. Perturbative approaches measure signal propagation directly, but previous efforts have been restricted to selected circuits or subregions of the brain, and have often achieved only cell-type and not single-cell resolution 17 – 22 .
Here we use neural activation to measure signal propagation between neurons throughout the head of C. elegans at single-cell resolution. We survey 23,433 pairs of neurons—the majority of the possible pairs in the head—to present a systematic atlas. We show that functional measurements better predict spontaneous activity than anatomy does, and that peptidergic extrasynaptic signalling contributes to neural dynamics by performing a functional role similar to that of a classical neurotransmitter.
Population imaging and single-cell activation
To measure signal propagation, we activated each single neuron, one at a time, through two-photon stimulation, while simultaneously recording the calcium activity of the population at cellular resolution using spinning disk confocal microscopy (Fig. 1 ). We recorded activity from 113 wild-type (WT)-background animals, each for up to 40 min, while stimulating a mostly randomly selected sequence of neurons one by one every 30 s. We spatially restricted our two-photon activation in three dimensions to be the size of a typical C. elegans neuron, to minimize off-target activation of neighbouring neurons (Extended Data Fig. 2a,c–e,i,j and Supplementary Information ). Animals were immobilized but awake, and pharyngeal pumping was visible during recordings. To overcome the challenges associated with spectral overlap between the actuator and the indicator, we used TWISP—a transgenic worm for interrogating signal propagation 23 , which expresses a purple-light actuator, GUR-3/PRDX-2 (refs. 24 , 25 ) and a nuclear-localized calcium indicator GCaMP6s (ref. 26 ) in each neuron (Fig. 1b and Extended Data Fig. 2b ), along with fluorophores for neural identification from NeuroPAL (ref. 27 ) (Fig. 1c ). Validation of the GUR-3/PRDX-2 system is discussed in the Supplementary Information (see also Extended Data Fig. 2h and Supplementary Video 1 ). A drug-inducible gene-expression system was used to avoid toxicity during development, resulting in animals that were viable but still significantly less active than WT animals 23 (see Methods ). A stimulus duration of 0.3 s or 0.5 s was chosen to evoke modest calcium responses (Extended Data Fig. 2f ), similar in amplitude to those evoked naturally by odour stimuli 28 .
Many neurons exhibited calcium activity in response to the activation of one or more other neurons (Fig. 1d ). A downstream neuron’s response to a stimulated neuron is evidence that a signal propagated from the stimulated neuron to the downstream neuron.
We highlight three examples from the motor circuit (Fig. 1e–g ). Stimulation of the interneuron AVJR evoked activity in AVDR (Fig. 1e ). AVJ had been predicted to coordinate locomotion after egg-laying by promoting forward movements 29 . The activity of AVD is associated with sensory-evoked (but not spontaneous) backward locomotion 7 , 8 , 30 , 31 , and AVD receives chemical and electrical synaptic input from AVJ 1 , 6 . Therefore, both wiring and our functional measurements suggest that AVJ has a role in coordinating backward locomotion, in addition to its previously described roles in egg-laying and forward locomotion.
Activation of the premotor interneuron AVER evoked activity transients in AVAR (Fig. 1f ). Both AVA 31 – 35 (Extended Data Fig. 2h ) and AVE 31 , 36 are implicated in backward movement. Their activities are correlated 31 , and AVE makes gap-junction and many chemical synaptic contacts with AVA 1 , 6 .
Activation of the turning-associated neuron SAADL 36 inhibited the activity of the sensory neuron OLLR. SAAD had been predicted to inhibit OLL, on the basis of gene-expression measurements 37 . SAAD is cholinergic and it makes chemical synapses to OLL, which expresses an acetylcholine-gated chloride channel, LGC-47 (refs. 6 , 38 , 39 ). Other examples consistent with the literature are reported in Extended Data Table 1 .
Signal propagation map
We generated a signal propagation map by aggregating downstream responses to stimulation from 113 C. elegans individuals (Fig. 2a ). We report the mean calcium response in a 30-s time window averaged across trials and animals (Extended Data Fig. 3a ). We imaged activity in response to stimulation for 23,433 pairs of neurons (66% of all possible pairs in the head). Measured pairs were imaged at least once, and some as many as 59 times (Extended Data Figs. 3b and 4a ). This includes activity from 186 of 188 neurons in the head, or 99% of all head neurons.
We developed a statistical framework, described in the Methods , to identify neuron pairs that can be deemed ‘functionally connected’ ( q < 0.05; Extended Data Fig. 4b ), ‘functionally non-connected’ ( q eq < 0.05; Extended Data Fig. 5b ) or for which we lack the confidence to make either determination. The statistical framework is conservative and requires consistent and reliable responses (or non-responses) compared to an empirical null distribution, considering effect size, sample size and multiple-hypothesis testing 40 to make either determination. Many neuron pairs fail to pass either statistical test, even though they often contain neural activity that, when observed in isolation, could easily be classified as a response (for example, AVJR→ASGR in Extended Data Fig. 4c ).
Our signal propagation map comprises the response amplitude and its associated q value (Fig. 2a and Extended Data Fig. 5a ) and can be browsed online ( https://funconn.princeton.edu ) through software built on the NemaNode platform 6 . A total of 1,310 of the 23,433 measured neuron pairs, or 6%, pass our stringent criteria to be deemed functionally connected at q < 0.05 (Fig. 2c ). Neuron pairs that are deemed functionally non-connected are reported in Extended Data Fig. 5b . Note that, in all cases, functional connections refer to ‘effective connections’ because they represent the propagation of signals over all paths in the network between the stimulated and the responding neuron, not just the direct (monosynaptic) connections between them.
C. elegans neuron subtypes typically consist of two bilaterally symmetric neurons, often connected by gap junctions, that have similar wiring 1 and gene expression 38 , and correlated activity 41 . As expected, bilaterally symmetric neurons are (eight times) more likely to be functionally connected than are pairs of neurons chosen at random (Fig. 2c ).
The balance of excitation and inhibition is important for a network’s stability 42 , 43 but has not to our knowledge been previously measured in the worm. Our measurements indicate that 11% of q < 0.05 functional connections are inhibitory (Fig. 2d ), comparable to previous estimates of around 20% of synaptic contacts in C. elegans 37 or around 20% of cells in the mammalian cortex 44 . Our estimate is likely to be a lower bound, because we assume that we only observe inhibition in neurons that already have tonic activity.
As expected from anatomy, neuron pairs that had direct (monosynaptic) wired connections were more likely to be functionally connected than were neurons with only indirect or multi-hop anatomical connections. Similarly, the likelihood of functional connections decreased as the minimal path length through the anatomical network increased (Fig. 2e ). Conversely, neurons that had large minimal path lengths through the anatomical network were more likely to be functionally non-connected than were neurons that had a single-hop minimal path length (Fig. 2g ). We investigated how far responses to neural stimulation penetrate into the anatomical network. Functionally connected ( q < 0.05) neurons were on average connected by a minimal anatomical path length of 2.1 hops (Fig. 2f ), suggesting that neural signals often propagate multiple hops through the anatomical network or that neurons are also signalling through non-wired means.
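The minimal anatomical path length between a stimulated and a responding neuron can be computed with a breadth-first search over the wiring diagram. A minimal sketch, assuming the connectome is available as a directed adjacency mapping; the toy wiring below is illustrative, not the real connectome.

```python
from collections import deque

def min_hops(adjacency, source, target):
    """Minimal path length (in hops) from source to target in a directed
    wiring diagram, via breadth-first search; None if no path exists.
    adjacency: dict mapping neuron -> iterable of postsynaptic neurons."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        node, hops = queue.popleft()
        for nxt in adjacency.get(node, ()):
            if nxt == target:
                return hops + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None

# Illustrative toy wiring (not the real connectome):
wiring = {"AVJR": ["AVDR"], "AVDR": ["AVAR"]}
```

In this toy graph, `min_hops(wiring, "AVJR", "AVAR")` returns 2, the kind of multi-hop minimal path length averaged in Fig. 2f.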
Most neuron pairs exhibited variability across trials and animals: downstream neurons responded to some instances of upstream stimulations but not others (Extended Data Fig. 6a ); and the response’s amplitude, temporal shape and even sign also varied (Extended Data Fig. 6b–e ). Some variability in the downstream response can be attributed to variability in the upstream neuron’s response to its own stimulation, called its autoresponse. To study the variability of signal propagation excluding variability from the autoresponse, we calculated a kernel for each stimulation that evoked a downstream response. The kernel gives the activity of the downstream neuron when convolved with the activity of the upstream neuron. The kernel describes how the signal is transformed from upstream to downstream neuron for that stimulus event, including the timescales of the signal transfer (Extended Data Fig. 6b,c ). We characterized the variability of each functional connection by comparing how these kernels transform a standard stimulus (Extended Data Fig. 6e ). Kernels for many neuron pairs varied across trials and animals, presumably because of state- and history-dependent effects 45 , including from neuromodulation 16 , 46 , plasticity and interanimal variability in wiring and expression. As expected, kernels from one neuron pair were more similar to each other than to kernels from other pairs (Extended Data Fig. 6f ).
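A minimal sketch of the kernel formalism: the downstream trace is the upstream autoresponse convolved with the connection's kernel. The single-exponential kernel and the idealized box-shaped autoresponse below are assumptions for illustration only; the fitted kernel family and procedure are described in the Methods .

```python
import numpy as np

dt = 0.1                                   # s, hypothetical sampling interval
t = np.arange(0.0, 10.0, dt)

def exp_kernel(t, amplitude, tau):
    """Hypothetical single-exponential kernel (the fitted kernel family
    used in the paper is described in the Methods)."""
    return amplitude * np.exp(-t / tau)

def downstream_response(upstream, kernel, dt):
    """Downstream activity as the convolution of upstream activity
    with the connection's kernel."""
    return np.convolve(upstream, kernel)[: upstream.size] * dt

kernel = exp_kernel(t, amplitude=1.0, tau=2.0)
upstream = (t < 0.5).astype(float)         # idealized 0.5-s autoresponse
downstream = downstream_response(upstream, kernel, dt)
```

Comparing how different fitted kernels transform one standard stimulus, as in Extended Data Fig. 6e, then isolates connection variability from autoresponse variability.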
Functional measurements differ from anatomy
We observed an apparent contradiction with the wiring diagram—a large fraction of neuron pairs with monosynaptic (single-hop) wired connections are deemed functionally non-connected in our measurements (Fig. 2g ). To further compare our measurements to anatomy, we sought to better understand what responses we should expect from the wiring diagram. Anatomical features such as synapse count are properties of only the direct (monosynaptic) connection between two neurons, but our signal propagation measurements reflect contributions from all paths through the network (Fig. 3a ). To compare the two, we relied on a connectome-constrained biophysical model that predicts signal propagation from anatomy, considering all paths. We activated neurons in silico and simulated the network’s predicted response using synaptic weights from the connectome 1 , 6 , polarities estimated from gene expression 37 and common assumptions about timescales and dynamics 47 .
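The model itself is specified in the Methods ; the sketch below is only a generic leaky linear rate model with a hypothetical signed weight matrix, illustrating what activating a neuron in silico and reading out a predicted response ΔV can look like when all network paths contribute.

```python
import numpy as np

def simulate_response(W, stim_idx, stim_amp=1.0, tau=1.0, dt=0.01, t_max=30.0):
    """Leaky linear rate model, tau * dV/dt = -V + W @ V + I, integrated
    with forward Euler. W is a signed weight matrix (hypothetical values;
    entry W[i, j] is the input to neuron i from neuron j)."""
    n = W.shape[0]
    V = np.zeros(n)
    I = np.zeros(n)
    I[stim_idx] = stim_amp                  # sustained in-silico stimulation
    for _ in range(int(t_max / dt)):
        V = V + (dt / tau) * (-V + W @ V + I)
    return V                                # approximate steady-state deltaV
```

For a two-neuron chain with a single excitatory weight of 0.5 from neuron 0 to neuron 1, stimulating neuron 0 drives it to ΔV ≈ 1 and its downstream partner to ΔV ≈ 0.5.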
The anatomy-derived biophysical model made some predictions that agreed with our measurements. Neuron pairs that the model predicted to have large responses (Δ V > 0.1) were significantly more likely to have larger measured responses than were those predicted to have little or no response (Δ V < 0.1) (Fig. 3b ), showing agreement between structure and function. Similarly, pairs of neurons that we measured to be functionally connected ( q < 0.05) are enriched for anatomy-predicted large responses (Δ V > 0.1) compared to pairs that our measurements deem functionally non-connected ( q eq < 0.05), (Fig. 3c , top).
Overall, however, there was fairly poor agreement between anatomy-based model predictions and our measurements. For example, we measured large calcium responses in neuron pairs that were predicted from anatomy to have almost no response (Fig. 3c ). There was also poor agreement between anatomy-based prediction and measurement when considering the response amplitudes of all neuron pairs (Fig. 3d , R 2 < 0, where an R 2 of 1 would be perfect agreement).
Fundamental challenges in inferring the properties of neural connections from anatomy could contribute to the disagreement between anatomical-based model predictions and our measurements. It is challenging to infer the strength and sign of a neural connection from anatomy when many neurons send both excitatory and inhibitory signals to their postsynaptic partner 11 , 37 . AFD→AIY, for example, expresses machinery for inhibiting AIY through glutamate, but is excitatory owing to peptidergic signalling 48 (Extended Data Fig. 2g ). We therefore wondered whether agreement between structure and function would improve if we instead fitted the strength and sign of the wired connections to our measurements. Fitting the weights and signs, given simplifying assumptions, but forbidding new connections that do not appear in the wiring diagram, improved the agreement between the anatomical prediction and the functional measurements, although overall agreement remained poor (Fig. 3d ). We therefore investigated whether additional functional connections exist beyond the connectome. We measured signal propagation in unc-31 -mutant animals, which are defective for extrasynaptic signalling mediated by dense-core vesicles, as explained below. Although agreement was still poor, signal propagation in these animals showed better agreement with anatomy than it did in WT animals (Fig. 3d ). This prompted us to consider extrasynaptic signalling further.
Extrasynaptic signalling also drives neural dynamics
Neurons can communicate extrasynaptically by releasing transmitters, often via dense-core vesicles, that diffuse through the extracellular milieu to reach downstream neurons instead of directly traversing a synaptic cleft ( Supplementary Information ). Extrasynaptic signalling forms an additional layer of communication not visible from anatomy 49 and its molecular machinery is ubiquitous in mammals 50 and C. elegans 38 , 51 , 52 .
To examine the role of extrasynaptic signalling, we measured the signal propagation of unc-31 -mutant animals defective for dense-core-vesicle-mediated release (Extended Data Fig. 7a ; 18 individuals) and compared the results with those from WT animals (browsable online at https://funconn.princeton.edu ). This mutation disrupts dense-core-vesicle-mediated extrasynaptic signalling of peptides and monoamines by removing UNC-31 (CAPS), a protein involved in dense-core-vesicle fusion 53 .
We expected that most signalling in the brain visible within the timescales of our measurements (30 s) would be mediated by chemical or electrical synapses and would therefore be unaffected by the unc-31 mutation. Consistent with this, many individual functional connections that we observed in the WT case persisted in the unc-31 mutant (Extended Data Fig. 8 ). But if fast dense-core-vesicle-dependent extrasynaptic signalling were present, it should be observed only in WT and not in unc-31 -mutant individuals. Consistent with this, unc-31 animals had a smaller proportion of functional connections than did WT animals (Extended Data Fig. 7b ).
We investigated the neuron RID, a cell that is thought to signal to other neurons extrasynaptically through neuropeptides, and that has only few and weak outgoing wired connections 54 . RID had dim tagRFP-T expression, so we adjusted our analysis protocol for only this neuron, as described in the Methods . Many neurons responded to RID activation (Extended Data Fig. 7c ), including URX, ADL and AWB, three neuron subtypes that were predicted from anatomy to have no response (Fig. 4a ). These three neurons showed strong responses in WT animals but their responses were reduced or absent in unc-31 mutants (Fig. 4b–d ), consistent with dense-core-vesicle-mediated extrasynaptic signalling. The gene expression and wiring of these neurons also suggest that peptidergic extrasynaptic signalling is producing the observed responses. All three express receptors for peptides produced by RID (NPR-4 and NPR-11 for FLP-14 and PDFR-1 for PDF-1), and no direct (monosynaptic) wiring connects RID to URX, ADL or AWB: a minimum of two hops are required from RID to URXL or AWBR, and three from RID to ADLR. These shortest paths all rely on fragile single-contact synapses that appear in only one out of the four individual connectomes 6 . We conclude that RID signals to other neurons extrasynaptically, and that this is captured by signal propagation measurements but not by anatomy.
Extrasynaptic-dependent signal propagation screen
To identify new pairs of neurons that communicate purely extrasynaptically, we performed an unbiased screen and selected for neuron pairs that had functional connections in WT animals ( q < 0.05) but were functionally non-connected in unc-31 mutants ( q eq < 0.05). Fifty-three pairs of neurons met our criteria (Extended Data Fig. 9 ), and were therefore putative candidates for purely extrasynaptic signalling. This is likely to be a lower bound because many more pairs could communicate extrasynaptically but might not appear in our screen, either because they don’t meet our statistical threshold or because they communicate through parallel paths, of which only some are extrasynaptic. Other scenarios not captured by the screen, and additional caveats, are discussed in the Supplementary Information . The timescales of signal propagation for those neuron pairs that passed our screen were similar to that of all functional connections (Fig. 5a ), suggesting that in the worm, unc-31 -dependent extrasynaptic signalling can also propagate quickly.
Neuron pair M3L→URYVL is a representative example of a purely extrasynaptic-dependent connection found from our screen. There are no direct chemical or electrical synapses between M3L and URYVL, but stimulation of M3L evokes unc-31 -dependent calcium activity in URYVL (Fig. 5b ). The majority of neuron pairs identified in our screen express peptide and receptor combinations consistent with extrasynaptic signalling 38 , 52 (Supplementary Table 1 ). For example, M3L expresses FLP-4, which binds to the receptor NPR-4, expressed by URYVL; and FLP-5, which binds to the receptor NPR-11, also expressed by URYVL.
The bilateral neuron pair AVDR and AVDL was also identified in our screen for having purely extrasynaptic-dependent connections. AVDR and AVDL have no or only weak wired connections between them (three of four connectomes show no wired connections, and the fourth finds only a very weak gap junction), but stimulation of AVDR evoked robust unc-31 -dependent responses in AVDL. Notably, the AVD cell type was recently predicted to have a peptidergic autocrine loop 51 mediated by the neuropeptide–GPCR combinations NLP-10→NPR-35 and FLP-6→FRPR-8 (refs. 38 , 52 ) (Fig. 5c ). The bilateral extrasynaptic signalling that we observe is consistent with this prediction because two neurons that express the same autocrine signalling machinery can necessarily signal to one another. AVD was also predicted to be among the top 25 highest-degree ‘hub’ nodes in a peptidergic network based on gene expression 51 , and, in agreement, AVD is highly represented among hits in our screen (Extended Data Fig. 9b ).
Signal propagation predicts spontaneous activity
A key motivation for mapping neural connections is to understand how they give rise to collective neural dynamics. We tested the ability of our signal propagation map to predict worms’ spontaneous activity, and compared this to predictions from anatomy (Fig. 6 ). Spontaneous activity was measured in immobilized worms lacking optogenetic actuators under bright imaging conditions. A matrix of bare anatomical weights (synapse counts) was a poor predictor of the correlations of spontaneous activity (left bar, Fig. 6 ), consistent with previous reports 27 , 41 . The connectome-constrained biophysical model from Fig. 3 better predicted spontaneous activity correlations (middle bars, Fig. 6 ; described in the Methods )—as we would expect because it considers all anatomical paths through the network—but it still performed fairly poorly. Predictions based on our functional measurements of signal propagation kernels (right bars, Fig. 6 ) performed best of all at predicting spontaneous activity correlations. To generate predictions of correlations either from the biophysical model or from our functional kernel measurements required the activity of a set of neurons to be driven in silico. For the biophysical model, driving all neurons was optimal, but for the kernel-based predictions, driving a specific set of six neurons (‘top-n’) markedly improved performance. We conclude that functionally derived predictions based on our measured signal propagation kernels better agree with spontaneous activity than do either a bare description of anatomical weights or an established model constrained by the connectome, and that some subsets of neurons make outsized contributions to driving spontaneous dynamics. 
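One way to score such predictions, assuming correlation matrices are compared entry-wise over off-diagonal neuron pairs, is the coefficient of determination; R² is negative (as for the anatomy-only predictions in Fig. 3d) whenever the prediction does worse than simply predicting the mean. This is a minimal sketch; the paper's exact scoring is in the Methods .

```python
import numpy as np

def corr_matrix(activity):
    """Pairwise Pearson correlations of spontaneous activity (neurons x time)."""
    return np.corrcoef(activity)

def r_squared(predicted, measured):
    """Coefficient of determination over off-diagonal matrix entries.
    Negative whenever the prediction does worse than predicting the mean."""
    mask = ~np.eye(measured.shape[0], dtype=bool)
    y, y_hat = measured[mask], predicted[mask]
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction scores R² = 1; an uninformative prediction (for example, all off-diagonal correlations set to zero) can score below zero.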
The kernel-based simulation (interactive version at https://funsim.princeton.edu ) outperforms other models of neural dynamics presumably for two reasons: first, it extracts all relevant parameters directly from the measured kernels, thereby avoiding the need for many assumptions; and second, it captures extrasynaptic signalling not visible from anatomy.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41586-023-06683-4.
Extended data is available for this paper at 10.1038/s41586-023-06683-4.
Supplementary information
The online version contains supplementary material available at 10.1038/s41586-023-06683-4.
Acknowledgements
We thank J. Bien, A. Falkner, F. Graf Leifer, M. Murthy, E. Naumann, H. S. Seung and J. Shaevitz for comments on the manuscript. Online visualization software and hosting was created by research computing staff in the Lewis-Sigler Institute for Integrative Genomics and the Princeton Neuroscience Institute, with particular thanks to F. Kang, R. Leach, B. Singer, S. Heinicke and L. Parsons. Research reported in this work was supported by the National Institutes of Health National Institute of Neurological Disorders and Stroke under New Innovator award number DP2-NS116768 to A.M.L.; the Simons Foundation under award SCGB 543003 to A.M.L.; the Swartz Foundation through the Swartz Fellowship for Theoretical Neuroscience to F.R.; the National Science Foundation through the Center for the Physics of Biological Function (PHY-1734030); and the Boehringer Ingelheim Fonds to S.D. Strains from this work are being distributed by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440).
Author contributions
A.M.L. and F.R. conceived the investigation. F.R., S.D. and A.M.L. contributed to the design of the experiments and the analytical approach. F.R. and S.D. conducted the experiments. A.K.S. designed and performed all transgenics. F.R. designed and built the instrument and the analysis framework and pipeline. F.R. and S.D. performed the bulk of the analysis with additional contributions from A.M.L. and A.K.S. All authors wrote and reviewed the manuscript. F.R. is currently at Regeneron Pharmaceuticals. F.R. contributed to this article as an employee of Princeton University and the views expressed do not necessarily represent the views of Regeneron Pharmaceuticals.
Peer review
Peer review information
Nature thanks Mei Zhen and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Data availability
Machine-readable datasets containing the measurements from this work are publicly accessible through an Open Science Framework repository at 10.17605/OSF.IO/E2SYT. Interactive browsable versions of the same data are available online at https://funconn.princeton.edu and http://funsim.princeton.edu . CeNGeN data 38 were accessed through http://www.cengen.org/cengenapp/ . Source data are provided with this paper.
Code availability
All analysis code is publicly available at https://github.com/leiferlab/pumpprobe (10.5281/zenodo.8312985), https://github.com/leiferlab/wormdatamodel (10.5281/zenodo.8247252), https://github.com/leiferlab/wormneuronsegmentation-c (10.5281/zenodo.8247242) and https://github.com/leiferlab/wormbrain (10.5281/zenodo.8247254). Hardware acquisition code is available at https://github.com/leiferlab/pump-probe-acquisition (10.5281/zenodo.8247258).
Competing interests
The authors declare no competing interests.

Nature. 2023 Nov 1; 623(7986): 406–414.
Introduction
Social, economic, environmental and structural disparities contribute to health inequities between and within communities. 1 , 2 Those who are socially disadvantaged have less access to health services, suffer from more illnesses and die at a younger age. 3 Due to unjust social structures and institutionalized racism, Black patients experience enduring and pervasive health disparities. Cultural distrust between Black patients and health care providers has been triggered by a history of medical mistreatment and social inequity. 4 Positive health care outcomes are dependent upon interpersonal trust between patients and health care providers, as well as social trust in health care organizations. 5 Related to interpersonal trust is the acceptance of recommended treatment, satisfaction with care and self-reported health improvement. 6 Given the expanding scope of pharmacy practice, interpersonal trust between patients and pharmacists has become particularly important.
Pharmacists have the opportunity to play a major role in addressing unmet patient care needs to reduce some pressure on the health care system as it faces new challenges. 7 , 8 In Nova Scotia, the importance of pharmacists in the health care system has increased due to the burden of COVID-19, lengthy emergency room wait times and a shortage of primary care providers. 9 , 10 To alleviate some of these stressors, pharmacist-led health care clinics have been established in Nova Scotia. These clinics offer services such as managing chronic diseases (e.g., diabetes and asthma), as well as diagnosing and treating some acute conditions (e.g., urinary tract infections and strep throat). 11
As the role of pharmacists expands, the delivery of culturally safe care for Black patients using these services needs to be a priority. Although systemic differences in health care delivery and their effects on health outcomes are well recognized, little is known about how these manifest in pharmacy practice. Given the scarcity of research in this area, we aimed to explore the experiences of Black Nova Scotians with community pharmacists.
Methods
Study design
This was a qualitative study that used focus groups and one-on-one interviews. This dual approach was used due to availability of participants during a substantial wave of COVID-19 in our jurisdiction. Interviews and focus group sessions took place virtually on Zoom for Healthcare, which has additional privacy measures to protect the identities of participants. Sessions were audio-recorded, transcribed and then analyzed thematically. This study was approved by the Nova Scotia Health Research Ethics Board (REB 32435).
Participants
Black Nova Scotians at least 18 years of age who have had interactions with community pharmacists were invited to participate in this research. Flyers advertising the study were posted on social media platforms such as Facebook, Instagram and Twitter. Local and provincial organizations such as Promoting Leadership in Health for African Nova Scotians (PLANS), Delmore Buddy Daye Learning Institute (DBDLI), Imhotep’s Legacy Academy (ILA) and the Pharmacy Association of Nova Scotia (PANS) were contacted to distribute the study flyer. In total, 16 volunteers ( n = 16) participated in this study.
Semi-structured interviews and focus group moderation
Interviews and focus group sessions were conducted by the principal investigator and lasted a maximum of 2 hours. Interviews consisted of semi-structured, open-ended questions ( Appendix 1 ). Prior to the focus group sessions and one-on-one interviews, informed consent, demographic data and pledge of confidentiality forms were completed.
Data collection and analysis
Interviews and focus group sessions were audio-recorded and transcribed by the principal investigator. Field notes were completed after each focus group and interview to highlight key words or phrases expressed by participants. NVIVO 12 software was used to facilitate data analysis. 12 Data were analyzed thematically using a constant comparative method to compare responses of participants in each focus group and individual interviews. This involved identifying codes, combining codes into categories and then identifying themes. 13 Thematic analysis was conducted by the principal investigator and reviewed by the research team.
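The coding tallies reported in the Results (new codes per session, and code overlap between focus groups and interviews) can be expressed as simple set operations. The sketch below is purely illustrative; the session names and codes are hypothetical, and the actual bookkeeping was done in NVIVO 12.

```python
def new_codes_per_session(sessions):
    """Count codes appearing for the first time in each session,
    in chronological order. sessions: list of (name, set_of_codes)."""
    seen, counts = set(), {}
    for name, codes in sessions:
        fresh = codes - seen
        counts[name] = len(fresh)
        seen |= fresh
    return counts

def code_overlap(codes_a, codes_b):
    """Codes shared between two groups of sessions."""
    return codes_a & codes_b
```

In the constant comparative method, such tallies help show when later sessions stop producing new codes, one common indicator of data saturation.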
Rigour and trustworthiness
Measures were taken to increase the trustworthiness of this research. A reflective journal was used during data collection and analysis to record thoughts and codes, as well as identify any assumptions and biases with potential to affect the data analysis process. 14 Peer debriefing sessions with research team members were also held to further explore the data, provide new insight and challenge any interpretations made by the principal investigator. 15 , 16 Anonymous quotes from participants were used to show the authenticity of the findings. Last, an audit trail was kept to document the development of the completed analysis of the data. 17
Results
Two focus groups, each with 5 participants, and 6 one-on-one interviews ( n = 16) were conducted between May and June 2021. The majority of participants were female and between the ages of 18 and 35, and most had resided in Nova Scotia since birth. All participants had completed formal education at the college or university level. A comparison of the number of new codes that emerged from each session is shown in Table 1 . Three major themes, 9 categories and 54 codes were identified. Code overlap between focus groups and interviews is presented in Table 2 . The 3 major themes were as follows: difficulties navigating a pharmacy as a Black person, lack of inclusivity and cultural competence in the pharmacy and transactional relationships with pharmacists ( Table 3 ). A list of illustrative quotes organized by themes and categories is given in Table 4 .
Difficulties navigating a pharmacy as a Black person
Many of the conversations in the focus groups and one-on-one interviews were centred on the barriers and difficulties of navigating the pharmacy as someone who is visibly Black. Categories under this theme were Consciousness of Blackness; Microaggressions and stereotyping; and If I was white, it wouldn’t be this complicated.
Consciousness of Blackness
“The moment that I wake up and I interact or talk to someone, I’m always like hypervigilant of the fact that I’m Black. Yep, so it’s like being careful by the way that I present myself and what I say and how I say it and how people are going to perceive me.”
Participants felt that their race negatively affected the quality of care they received from pharmacists. In many areas of their lives, they described looking at themselves through the eyes of a racist society and this was no different in the pharmacy. This heightened awareness added a layer of anxiety, as many of them had fears of being perceived negatively, which could affect the care that they receive.
Microaggressions and stereotyping
Microaggressions and stereotypes were identified as contributing to the poor quality of care received by participants from pharmacists. These instances were often described as being subtle and woven into the routine at the pharmacy. Female participants described having to police their emotions within the pharmacy out of fear of being labelled as the stereotypical “angry Black woman.” Other stereotypes described by participants included drug-seeking behaviour or being less educated or poor. In anticipation of discriminatory behaviour from pharmacy team members, participants described changing the way they spoke and behaved. Most felt that if they did not change the way in which they spoke, they would be labelled as uneducated and information could be withheld from them because the pharmacist might assume that they would not understand it.
If I was white, it wouldn’t be this complicated
A significant amount of time in each of the interviews and focus group sessions was spent discussing the way in which race complicated the pharmacy experience for participants. They described how even completing a simple task at the pharmacy, such as picking up a prescription, left many of them feeling drained. For participants, there was a sense of mental preparation that needed to happen before going into the pharmacy. Additionally, due to their lack of trust in the health care system, many participants were skeptical of the information provided by health care practitioners.
Lack of inclusivity and cultural competence in the pharmacy
This theme included the categories Lack of representation, Cultural competence education for pharmacists and Building genuine relationships with Black patients.
Lack of representation
The lack of Black pharmacists and pharmacy team members was an issue highlighted by all participants. It was discussed that the health care system was not made for Black people to thrive in and how lack of inclusivity and diversity reinforces this idea for many. Only 1 participant in this study had had an interaction with a Black pharmacist before. All participants felt that diversity was crucial for creating safe spaces for Black participants. Many stated that having a Black pharmacist would change the experience for them in many ways. They would feel a more genuine connection with the pharmacist. They also wouldn’t feel like they had to change the way they spoke or presented themselves, also known as code switching, in the pharmacy. Participants also explained that they would feel understood because of shared experiences and trauma. For participants, even having a pharmacist who was of a racially visible minority, not just Black, would make the space feel safer and more accessible. This would help them feel more comfortable asking questions, building relationships and learning more about the role of a pharmacist.
Cultural competence education for pharmacists
All participants felt that pharmacists are not equipped with the education to provide culturally safe care for Black people and emphasized the importance of pharmacists and pharmacy learners completing antiracism and antioppression education. Many highlighted that this education would also help minimize microaggressions and make practitioners aware of their implicit biases and internalized racism. Additionally, many felt it was important for pharmacists, especially those who work in proximity to historical African Nova Scotian communities, to become educated on the history of these communities.
Building genuine relationships with Black patients
Participants discussed that Black communities are often skeptical of outreach from health care practitioners due to mistrust between the community and the health care system. They emphasized that any effort to build this relationship must be genuine and not performative. To ensure the interaction is safe, participants highlighted that pharmacists need to make themselves aware of their implicit biases and make strides towards unlearning these beliefs before reaching out to Black people. Participants felt that this would result in genuine efforts to create safe spaces for Black people within the pharmacy, leading to improved relationships with the pharmacist. They also envisioned that this awareness would help create fair opportunities for all people accessing the pharmacy, as many participants believed that instances of being treated poorly stemmed from pharmacists not being aware of their internalized racism. | Discussion
Three major themes were identified in this research: 1) Difficulties navigating a pharmacy as a Black person, 2) Lack of inclusivity and cultural competence in the pharmacy and 3) Transactional relationships with pharmacists. These 3 themes refer to the inadequate care provided by pharmacists to Black patients as well as feelings of discomfort and hostility experienced by Black patients accessing community pharmacies.
The theme Difficulties navigating a pharmacy as a Black person is consistent with what is known from the literature on other health care providers, such as physicians. That literature suggests the following: 1) Black people are significantly more likely to believe their race negatively affects their health care and 2) Black patients are less trusting of their providers than white patients, which reduces the likelihood that Black people will use these services. 18 - 20 These findings are similar to what was seen in our study.
Culturally competent care means delivering effective care to patients who have diverse beliefs, values and behaviours. 21 , 22 Participants did not feel pharmacists were culturally competent and were therefore hesitant to explore and make use of pharmacists’ services. This is similar to findings from research with other health care providers suggesting that a lack of cultural competence can lead to patient dissatisfaction, lower quality of care and a negative impact on utilization of services and adherence to recommendations from practitioners. 23 Cultural competence is a crucial component of patient-centred care and is needed to reduce racial disparities in health care. 24 Many interventions can be used to increase the cultural competency of an organization, such as making it a priority, increasing diversity, involving the community and investigating and reporting disparities that may exist. 25
One concept rarely seen in the literature on other health care practitioners was that some participants described their relationship with the pharmacist as strictly transactional. The literature describes the idea of transactional care within medicine, specifically nursing. 26 However, this differs from our findings, in which participants described the role of the pharmacist as being similar to that of a cashier. This may reflect traditional community pharmacist roles, such as dispensing. Over time, pharmacists have transitioned into a role where they are more integrated into the patient care process and recognized as medication management experts. Despite this expansion in the scope of practice of pharmacists, participants described pharmacists as a bridge between their doctor and their medication. Most participants also described feeling too unsafe or uncomfortable in the pharmacy to explore what else the pharmacist could do for them. Pharmacists often describe themselves as the most accessible health care practitioners; however, if the pharmacy is a place where some do not feel safe enough to explore and learn about all of these options, then who are they accessible to? 7 , 27
Strengths and limitations
This research had several strengths. To our knowledge, this is the first study to explore the attitudes and experiences of Black people with community pharmacists in North America. Participants were interested and engaged, which allowed for the collection of rich, detailed information on a topic that is understudied. This information is relevant to pharmacy practice and provides insights into how practice needs to change to create safe spaces for Black people accessing pharmacy services. Theoretical data saturation appears to have been reached, as a total of only 9 new codes were identified across all 6 one-on-one interviews ( Table 2 ).
Accessibility was the primary limitation of this study. Advertisements to participate in this research, as well as the interviews and focus group sessions themselves, were conducted virtually due to the COVID-19 pandemic. This means that those who did not have access to a phone or computer and Internet service were not able to participate in this study. Furthermore, with 1 exception, all participants were female. All participants had completed postsecondary education, and the majority were between the ages of 18 and 35. As such, their experiences may differ from those of others, potentially reducing the generalizability of the results. However, data from studies with other health care professionals examining a wider demographic of Black people show similar results. 28 - 31 Additionally, pharmacists work in patient care areas outside of community pharmacy; however, this research only looked at the experiences of Black Nova Scotians with community pharmacists. Last, there may also have been selection bias toward individuals who had had negative experiences at community pharmacies. | Conclusion
This research describes the lived experiences of Black patients with community pharmacists. It also highlights the fact that in the current pharmacy setting, many pharmacists do not provide culturally safe care to Black patients. Unless effort is made to understand systemic racism, perceived discrimination and the influences of social determinants of health in the context of pharmaceutical care, health disparities will continue to harm Black patients accessing pharmacy services. The task at hand is not to challenge these attitudes but rather to develop ways to build trust, create safe spaces and dismantle the institutionalized and systemic racism that plagues our health care system. ■ | Although previous research has shown that systemic racism and discrimination is a significant contributor to the poorer health outcomes experienced by Black patients, there is little research within North America evaluating the impact on Black patients accessing pharmacy services specifically. We were interested in pursuing this research because as pharmacists’ scope of practice continues to expand, there must be an understanding of systemic racism and its impacts in pharmacy practice.
Although previous research has shown that systemic racism and discrimination contribute significantly to the poorer health outcomes experienced by Black patients, there is little research in North America specifically evaluating the impact on Black patients accessing pharmacy services. We were interested in pursuing this research because, as pharmacists’ scope of practice continues to expand, there must be an understanding of systemic racism and its impact on pharmacy practice.
Background:
A history of medical abuse and social inequality confounded by persistent racial discrimination in health care has triggered mistrust between Black patients and health care providers. Although the consequences of systemic racism on health outcomes are well understood, little is known about how they manifest in pharmacy practice. The objective of this study was to explore the experiences of Black Nova Scotians with community pharmacists.
Methods:
This was a qualitative study that used focus groups and one-on-one interviews. Black Nova Scotians 18 years of age and older who have had interactions with community pharmacists were invited to participate. Focus groups and interviews were audio-recorded, transcribed and analyzed thematically.
Results:
Two focus groups (n = 10) and 6 one-on-one interviews were held between May and June 2021. Three major themes were identified: 1) difficulties navigating a pharmacy as a Black person, 2) lack of inclusivity and cultural competence in the pharmacy and 3) transactional relationships with pharmacists.
Discussion:
Most participants felt their race negatively affected the quality of care they received from the pharmacist and that pharmacists were not culturally competent. Most participants did not consider pharmacists to be part of their health care team and described feeling unsafe or uncomfortable in the pharmacy.
Conclusions:
Pharmacists have an important role in closing the health equity gap. This research highlights the need for pharmacy education to include cultural competence and will be used to guide strategies to improve access to culturally safe pharmacy services for Black Nova Scotians. | Transactional relationships with pharmacists
This theme included the categories Lack of relationships with pharmacists, Lack of privacy in the pharmacy and Loyalty.
Lack of relationships with pharmacists
The transactional relationship between participants and their pharmacist was a major discussion item. Most participants spoke about not having a relationship with their pharmacist and seeing them as simply a bridge between their doctor and their medications. When asked who their first contact would be if they had questions about their medication, most participants said they would either try to contact their doctor or look the information up on Google. When probed further, some participants explained they would never contact their pharmacist with a question pertaining to their medication and others listed pharmacists as a last line option. When asked if they saw their pharmacist as part of their health care team, most participants said no.
A major contributor to participants seeing the relationship with their pharmacist as strictly transactional was that many considered pharmacists to be cold, too clinical and lacking empathy. Participants described leaving the pharmacy feeling frustrated and dismissed due to the lack of connection between them and their pharmacist. Many participants recognized that pharmacists were busy and that the pharmacy seemed understaffed; however, they felt that if their family doctor was able to make a genuine effort to build a relationship with them, then why couldn’t their pharmacist? When asked about medication counselling, some participants did not understand the importance of these counselling sessions. Most described them as short and rushed.
There was a lack of understanding among participants of the role and scope of practice of a pharmacist. For example, many did not understand the reason for the questions that pharmacists asked, such as the indication for their medications or if they had had the medication before. Due to this lack of understanding, some participants felt that pharmacists were intrusive and overstepping their role.
Lack of privacy in the pharmacy
One of the main issues for participants was the lack of privacy in the pharmacy. Many described feeling uneasy about receiving medication counselling at the counter. Often, participants discussed that they were unable to pay attention to what they were being told by their pharmacist due to anxiety that someone around them was listening. This was particularly an issue for participants who used pharmacies in small communities. Some participants explained that they had rarely been asked to go into the counselling room. Others explained that the only times they had been offered the counselling room was when it was for a medication or topic the pharmacist thought was sensitive. The lack of privacy coupled with the general sense of uneasiness in the pharmacy made participants feel hesitant to ask questions.
Loyalty
Most participants spoke about being loyal to 1 pharmacy, mainly because of convenience. However, there were a few participants who were loyal to 1 pharmacy because of long-term relationships they had with particular pharmacists. As a result, participants felt comfortable and safe with these pharmacists. These were the few instances where participants spoke highly of pharmacists and described their relationship as being more than transactional. Some participants referred to these pharmacists as their family pharmacist. Even when participants moved away from these pharmacies, they continued to use them as their regular pharmacy—1 participant would drive for almost an hour to her pharmacy to see her long-term pharmacist. This participant described travelling these distances because she felt respected and attended to by these pharmacists. Participants described not feeling this same safety and comfort anywhere else and therefore would not seek other pharmacists for questions or advice.
Relevance to practice
As health care practitioners, pharmacists have a role in closing the health equity gap. To begin this work, pharmacists must first listen to the needs of those affected by it. 32 Through listening, work to amend relationships and build trust between Black patients and pharmacy team members can begin, with the goal of improving health outcomes. Participants put forward the following recommendations to improve their experience with pharmacists and to create a safe space within the pharmacy. These recommendations should be used by stakeholders, educators and pharmacists to guide work that needs to be done to improve the experiences of Black Nova Scotians with community pharmacists.
1. Increased representation within the pharmacy by hiring more Black pharmacists, pharmacy assistants and technicians
2. Antiracism and antioppression education for pharmacists, pharmacy staff and learners
3. Community outreach by pharmacists to the Black community
4. Creating safe spaces within the pharmacy for Black customers
Supplemental Material | We thank the participants who volunteered to take part in this research and all organizations and individuals who advertised this research. | CC BY | no | 2024-01-16 23:36:46 | Can Pharm J (Ott). 2023 Oct 12; 156(6):316-323 | oa_package/9e/76/PMC10655799.tar.gz |
PMC10686531 | 38034939 | Introduction
Modern dual-chamber pacemakers (DCPPM) typically operate in atrial-based timing when programmed DDD, where the atrial–atrial (A–A) interval is fixed and the ventriculo-atrial interval (VAI, also known as the atrial escape interval) varies. 1 However, when certain devices are set to DDI or automatic mode switch (AMS) to non-tracking DDI programming, pacing rate is determined by ventricular-based timing where the calculated VAI is fixed and the A–A can vary. 1 In this case, VAI is calculated based on the base rate and the programmed paced atrioventricular (AV) delay. 1 For example, if the set lower rate is 60 b.p.m. (1000 ms) with paced AV delay set at 250 ms, the calculated VAI would be equal to 750 ms (1000 − 250 = 750). When native AV conduction is shorter than programmed AV delay, the VAI will remain the same but the A–A will shorten. This can lead to atrial pacing above the lower rate. For the previous example, if native AV conduction was 120 ms, the calculated VAI would remain 750 ms, and the effective atrial pacing rate would be 120 + 750 = 870 ms (68 b.p.m.). While this phenomenon is well described, there is a lesser known scenario where prolonged intrinsic AV conduction can also lead to atrial pacing above the set rate. 2 | Discussion
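To make the timing arithmetic in the introduction concrete, the following is a minimal illustrative sketch (not from the original report; the function name is ours) of how ventricular-based timing can yield an effective atrial rate above the programmed lower rate:

```python
def effective_atrial_rate_bpm(lower_rate_bpm: float,
                              paced_av_delay_ms: float,
                              intrinsic_av_ms: float) -> float:
    """Effective atrial pacing rate under ventricular-based timing.

    The device fixes the ventriculo-atrial interval (VAI) as the lower-rate
    interval minus the programmed paced AV delay. When intrinsic AV conduction
    is faster than the programmed delay, the A-A interval shortens to
    (intrinsic AV + VAI), raising the atrial pacing rate above the lower rate.
    """
    lower_rate_interval_ms = 60_000 / lower_rate_bpm       # e.g. 60 b.p.m. -> 1000 ms
    vai_ms = lower_rate_interval_ms - paced_av_delay_ms    # fixed atrial escape interval
    # The AV interval ends at whichever comes first: a sensed intrinsic beat
    # or the paced beat at the programmed AV delay.
    a_to_a_ms = min(intrinsic_av_ms, paced_av_delay_ms) + vai_ms
    return 60_000 / a_to_a_ms

# Worked example from the text: lower rate 60 b.p.m., paced AV delay 250 ms,
# intrinsic AV conduction 120 ms -> A-A = 120 + 750 = 870 ms.
print(int(effective_atrial_rate_bpm(60, 250, 120)))  # -> 68
```

With the text's example values, the A–A interval shortens to 870 ms, i.e. roughly 68–69 b.p.m. despite a lower rate of 60 b.p.m.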
Jafri et al. 2 reported a unique case of rapid atrial pacing at 150 b.p.m. in an Abbott device set DDI with lower rate of 90 b.p.m., which started with a series of PACs that fell just outside PVAB of the preceding AP event; and with prolonged AV time, led to pacing ‘crossover’ with delayed VS event occurring just after the next AP event. In our case, a similar mechanism was the culprit, but was even more serendipitous. The mechanism depended on transitioning from DDD to DDI mode during an atrial tachyarrhythmia, a perfectly timed PAC, prolonged intrinsic AV conduction, and rate responsive pacing. Since calculated VAI was ∼260 ms with a programmed AV time of 250 ms, the intended sensor driven rate (A–A) would have been equal to VAI + AV = 510 ms, meaning that the device was calculating VAI with the intent of effective atrial pacing at 510 ms (118 b.p.m.). This elevated pacing rate, in the setting of prolonged intrinsic AV conduction, led to the ‘crossover’ that generated rapid pacing. In DDI mode without rate responsive pacing, the device would have calculated VAI with intent to pace at the set lower rate of 70 b.p.m. (857 ms) that would have yielded a calculated VAI of 607 ms, which would have given ample time for the initial PAC to conduct before VAI was reached. In fact, any calculated VAI longer than 320 ms (the intrinsic AV conduction after the PAC) would have prevented crossover and therefore prevented rapid atrial pacing. This means that the episode could only happen with enabled rate responsive pacing for a target heart rate > 105 b.p.m. | Conclusion
With ventricular-based pacing modes, atrial pacing at rates surpassing the lower set rate is a known phenomenon when intrinsic AV conduction is shorter than programmed paced AV delay. 1 While Jafri et al. previously reported a case of elevated atrial pacing in the setting of prolonged AV conduction, we report a unique case with similar mechanism, but which also required an atrial tachyarrhythmia causing a mode switch, as well as rate adaptive pacing. This series of events has not been previously reported, requiring a perfect storm of tachyarrhythmia-induced AMS to ventricular-based timing, a precisely timed single fortuitous PAC, sustained slow pathway conduction, and an elevated sensor driven rate exceeding 105 b.p.m. | Conflict of interest: None declared.
Abstract
Background
While ventricular-based timing modes are known to cause atrial pacing above the lower rate when intrinsic atrioventricular (AV) conduction is shorter than the programmed AV delay, there is one case report, published in 2015 by Jafri et al., in which rapid atrial pacing was induced in an Abbott device set to DDI with a lower rate of 90 b.p.m. by an unsensed premature atrial complex and slow intrinsic AV conduction that allowed pacemaker ‘crossover.’
Case summary
We present a very unusual case of rapid atrial pacing at >180 b.p.m. due to a perfect storm of events that we believe has not been previously reported. A patient with a St. Jude Abbott DCPPM set DDDR had an atrial tachyarrhythmia causing a mode switch to DDIR, which uses ventricular-based timing. This was followed by a period of rapid atrial pacing that terminated spontaneously.
Discussion
This phenomenon depended on an initial atrial tachyarrhythmia causing a mode switch to DDIR. In addition, the set lower rate alone would not have led to a short enough calculated ventriculo-atrial interval (VAI), but because rate responsive pacing was enabled, the calculated VAI was short enough to promote the crossover in the setting of slow AV conduction and allow the rapid atrial pacing. Understanding this unique mechanism requires careful attention to pacemaker timing cycles and appreciation of the limitations of device programming. While it appears that a similar phenomenon was reported once in the literature, we believe that this episode of rapid atrial pacing was even more serendipitous due to the unlikely series of events required for its inception. | Summary figure
Case report
A 74-year-old man with sick sinus syndrome and a St. Jude Abbott DCPPM, placed 8 months prior, presented for a routine device check. The presenting rhythm was atrial paced (AP) ventricular sensed (VS) at 70 b.p.m. Atrial and ventricular impedance, sensing, and thresholds were appropriate and consistent with prior values. Device settings were DDDR with a base rate of 70 b.p.m., a maximum tracking rate of 105 b.p.m., and a maximum sensor rate of 130 b.p.m.; paced and sensed AV delays were 250 ms. Ventricular intrinsic preference was enabled, allowing intermittent AV delay extension to promote intrinsic conduction and mitigate ventricular pacing. A ventricular high rate episode with corresponding electrogram (EGM) was noted ( Figure 1 ). This episode commenced with a 1:1 atrial tachycardia (AT) at a rate of 150 b.p.m. with prolonged AV conduction, prompting AMS from DDDR to DDIR. Unexpectedly, this was followed by a period of rapid atrial pacing at 180–190 b.p.m., well exceeding the maximum tracking and sensor rates. Notably, there was no atrial anti-tachycardia pacing programmed on this device, and the native AV conduction was longer than the programmed AV delay. In addition, the observed AV interval during rapid atrial pacing appeared implausibly short. To understand the mechanism for this phenomenon, one must consider how pacemaker timing cycle parameters determine the pacing rate.
In ventricular-based timing modes, the cycle length between two atrial-paced events is determined by the calculated VAI, not the A–A interval as in atrial-based timing. 1 This calculation depends on the base rate, or sensor indicated rate, and the programmed paced AV delay. 1 This patient’s device was initially set to DDDR. While in an episode of AT, the device mode switched from atrial-based timing mode to a ventricular-based timing mode (DDIR). As shown in Figure 2 , the initial AT cycle length was ∼395 ms triggering AMS to DDIR, and intrinsic AV time was prolonged at 285 ms. Then, a perfectly timed premature atrial complex (PAC) fell within the post-ventricular atrial blanking period (PVAB) and was not sensed, thus failing to reset the VAI timer. With further AV nodal decrement, this PAC conducted with an AV time of 320 ms. However, the calculated VAI (atrial escape interval) had already elapsed at 258 ms, leading to an AP beat just before the PAC conducted to a ventricular-sensed event. This ‘pacemaker crossover’ created a short ‘pseudo-AV interval’ of 66 ms, and the ventricular-sensed event from the conducted PAC resets the VAI timer just after the AP beat. Ventriculo-atrial interval was then reached again at ∼260 ms, triggering another AP event with even slower AV time of 398 ms, once again ‘crossing over’ with AP occurring before VS event from the preceding paced beat. This consistent but prolonged AV conduction led to repetitive crossover with short pseudo-AV intervals resetting VAI. The VAI was calculated based on sensed activity, due to enabled rate responsive pacing, and an assumed paced AV time of 250 ms, leading to atrial pacing well above the max sensor rate.
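The crossover condition described above can be sketched as follows — an illustrative simplification, not device firmware, with function names of our own choosing: after an unsensed PAC, pacing 'crossover' occurs whenever the PAC's intrinsic AV conduction outlasts the calculated VAI.

```python
def calculated_vai_ms(driven_rate_bpm: float, paced_av_delay_ms: float) -> float:
    """VAI (atrial escape interval) = sensor/base-rate interval minus paced AV delay."""
    return 60_000 / driven_rate_bpm - paced_av_delay_ms

def crossover_after_unsensed_pac(vai_ms: float, intrinsic_av_ms: float) -> bool:
    """A PAC falling in PVAB does not reset the VAI timer; if its conduction to a
    ventricular-sensed event takes longer than the VAI, the next atrial pace fires
    first ('crossover'), creating a short pseudo-AV interval."""
    return intrinsic_av_ms > vai_ms

# Values from this case: sensor-driven rate ~118 b.p.m. (510 ms cycle), paced AV 250 ms
vai = calculated_vai_ms(118, 250)                     # ~258 ms
print(crossover_after_unsensed_pac(vai, 320))         # True -> rapid pacing sustained
# At the programmed lower rate of 70 b.p.m., VAI ~607 ms -> no crossover
print(crossover_after_unsensed_pac(calculated_vai_ms(70, 250), 320))  # False
```

In this sketch, any calculated VAI longer than the 320 ms conduction time prevents the crossover, which is why the episode required the elevated sensor-driven rate rather than the 70 b.p.m. lower rate.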
This event was stored as a ventricular high rate episode because the ventricular rate exceeded 175 b.p.m.; however, it lasted <20 s, and the stored EGM did not capture the termination. Presumably, it terminated with an atrial-paced beat that failed to conduct, or due to a spontaneous PAC or premature ventricular contraction. Importantly, there were no associated symptoms with this brief episode, nor were there any other similar episodes recorded before or within 6 months following. To prevent this episode from happening again, multiple programming changes could be made, including changing the AMS from DDIR to a non-sensor setting such as DDI or VVI, reducing the maximum sensor-driven rate, or shortening the programmed AV delay. However, each of these changes would have other implications. Therefore, another option would be to make no programming changes given that this was a single short episode with no associated symptoms. Even if lightning did strike twice, these episodes would be unlikely to sustain for any significant length of time given reliance on extremely long yet necessarily fixed 1:1 AV conduction. In the case of this patient, the company representative was contacted and agreed with our proposed mechanism and potential programming changes to prevent recurrence. However, ultimately no changes were made, and the phenomenon has not recurred since this initial episode. | Lead author biography
After completing medical school at Dartmouth in 2016, I completed internal medicine residency at Vanderbilt in 2019 and general cardiology fellowship at Wake Forest in 2022. I am currently a rising second-year cardiac electrophysiology fellow at Wake Forest University School of Medicine.
Consent: Informed consent for patient information and images to be published was provided by the patients in accordance with COPE guidelines.
Funding: None declared.
Data availability
All data and findings to support this case report are included in the article. | CC BY | no | 2024-01-16 23:35:07 | Eur Heart J Case Rep. 2023 Nov 21; 7(11):ytad586 | oa_package/4e/64/PMC10686531.tar.gz |
||
PMC10694898 | 38044421 | Introduction
As one of the most aggressive and deadly malignancies, pancreatic ductal adenocarcinoma (PDAC) has become the fourth leading cause of cancer-related death and is expected to advance to the second leading cause of cancer-related death within decades [ 1 , 2 ]. Despite the advances in the understanding of the molecular mechanisms and development of therapies for PDAC over the past few decades, its 5-year survival rate remains the lowest among all malignancies [ 3 ]. There is continuous proliferative signal transduction during the development of PDAC, and such continuous proliferation induced by oncogene expression can cause DNA replication stress, resulting in genomic instability and even apoptosis [ 4 ]. Therefore, cancer cells respond to DNA damage via activation of DNA damage response (DDR) pathways, mainly including base excision repair, nucleotide excision repair, mismatch repair, homologous recombination and non-homologous end joining [ 5 ]. During each cell cycle, more than 6 billion DNA base pairs are replicated, and this process can be affected by many sources of damage and replication stress [ 6 ]. DNA repair endows tumors with potent genomic stability and antiapoptotic ability, which can readily promote the malignant progression of tumors [ 7 ]. In addition, various chemotherapies for PDAC cause specific types of DNA damage; for example, platinum alkylating agents and the topoisomerase inhibitor irinotecan can lead to DNA double-strand breaks (DSBs), and the antimetabolic drugs 5-fluorouracil (5-FU) and gemcitabine (GEM) can cause single-base damage and single-strand DNA breaks (SSBs) [ 8 ], which can develop into DSBs upon accumulation [ 9 ]. To date, PARP inhibitors and cell cycle checkpoint inhibitors have proven to be effective therapies for PDAC [ 10 ]. Studies on DDR mechanism defects and inhibitors of DNA damage repair may provide new insights for the treatment of PDAC.
CircRNAs are a class of single-stranded, covalently closed circular noncoding RNAs that are widely present in eukaryotic cells [ 11 ]. CircRNAs are formed from pre-mRNAs of their host genes through selective back-splicing and circularization, and they have high stability and cannot be easily degraded by ribonucleases [ 12 ]. CircRNAs have been found to be involved in multiple steps of tumorigenesis and development, including DDR regulation [ 13 ]. CircSMARCA5 terminates SMARCA5 transcription at exon 15 to reduce its expression, thereby inhibiting SMARCA5-mediated DNA damage repair and cisplatin resistance in breast cancer (BC) [ 14 ]. CircITCH sponges miR-330-5p to increase SIRT6 expression, and SIRT6 then activates PARP1 to repair DNA damage to alleviate doxorubicin-induced cardiomyocyte damage and dysfunction [ 15 ]. However, there are few studies on the relationships between circRNAs and PDAC, and most of these have focused on miRNA-related mechanisms. Circ-MBOAT2 promotes the proliferation, metastasis and glutamine metabolism of PDAC cells through the miR-433-3p/GOT1 axis [ 16 ]. Cancer-associated fibroblast-specific circ-FARP1 binds to CAV1 and inhibits its ubiquitination by ZNRF1 to enhance the secretion of LIF; in addition, circ-FARP1 sponges miR-660-3p to increase the expression of LIF, thereby promoting the stemness and GEM resistance of PDAC cells [ 17 ]. To date, studies on the regulation of DNA damage by circRNAs in PDAC have not been reported.
In this study, we identified hsa_circ_0007919 as highly expressed in GEM-resistant PDAC tissues and cells. Hsa_circ_0007919 inhibited the DNA damage and apoptosis induced by GEM chemotherapy and maintained cell survival. Mechanistically, hsa_circ_0007919 recruits FOXA1 and TET1 to decrease methylation of the LIG1 promoter and enhance LIG1 transcription; LIG1 then participates in multiple DNA repair pathways to decrease GEM-related DNA damage. The function of hsa_circ_0007919 was also verified in a xenograft model in nude mice. | Materials and methods
Clinical tissue samples
A total of 95 pairs of PDAC tissues and adjacent tissues were collected from Xuzhou Medical University Affiliated Hospital. Among them, 50 patients had not received radiotherapy or chemotherapy, while 45 patients had received GEM neoadjuvant therapy. This study was approved by the Institutional Ethics Committee of Xuzhou Medical University Affiliated Hospital, and informed consent was obtained from all patients.
Cell culture and transfection
The normal human pancreatic duct cell line hTERT-HPNE and the PDAC cell lines PANC-1, CFPAC-1, BxPC-3 and MIA-PaCa2 were purchased from the Chinese Academy of Sciences (Shanghai, China) and cultured in RPMI 1640 medium (Hyclone, USA) containing 10% fetal bovine serum (FBS) (Gibco, USA), 100 U/ml penicillin and 100 μg/ml streptomycin (Beyotime, China) in an incubator at 37°C with 5% CO 2 . To construct GEM-resistant PDAC cell lines, PANC-1 and CFPAC-1 cells were cultured with GEM (MCE, USA) at increasing concentration gradients. For 5-AzaC treatment, PDAC cells were treated with 5 μM 5-AzaC (MCE, USA) for 72 h. All small interfering RNAs (siRNAs) and full-length plasmids for hsa_circ_0007919, LIG1, FOXA1 and TET1, together with negative controls, were purchased from GenePharma (Suzhou, China) and transfected into cells using Lipofectamine 2000 reagent (Invitrogen, USA) according to the manufacturer’s protocol. The siRNA sequences are as follows:
si-hsa_circ_0007919#1: 5’-GACAGAUCCAGGUGGAAGCTT-3’;
si-hsa_circ_0007919#2: 5’-ACAGAUCCAGGUGGAAGCATT-3’;
si-LIG1#1: 5’- AGAAGAUAGACAUCAUCAAAG-3’;
si-LIG1#2: 5’- CGUCAUUUCUUUCAAUAAAUA-3’;
si-FOXA1: 5’- GGAUGUUAGGAACUGUGAAGA-3’;
si-TET1: 5’- CGAAGCUACUGCAAAUCAACA-3’;
si-Ctrl: 5’- UUCUCCGAACGUGUCACGUTT-3’.
RNA extraction and quantitative real-time PCR (qRT-PCR)
Total RNA was extracted from tissues and cells with the RNA Isolater Total RNA Extraction Kit (Vazyme, China), cDNA was synthesized with HiScript II Q RT SuperMix for qPCR (Vazyme, China), and expression was detected with ChamQ SYBR qPCR Master Mix (Vazyme, China). All data were normalized to GAPDH or U6; data from tissues were quantified by the 2^−ΔCt method, while all other data were quantified by the 2^−ΔΔCt method. All primers were synthesized by GENEray (Shanghai, China), and their sequences are as follows:
hsa_circ_0007919-F: 5’-AGGTGGAAGCAGGGAAAG-3’;
hsa_circ_0007919-R: 5’-TCATGGGCAGCAACAGG-3’;
ABR-F: 5’-GGTGGATTCCTTCGGCTAT-3’;
ABR-R: 5’-CACTTGGGCTCCGCTGT-3’;
LIG1-F: 5’-GCCCTGCTAAAGGCCAGAAG-3’;
LIG1-R: 5’-CATGGGAGAGGTGTCAGAGAG-3’;
FOXA1-F: 5’-GCAATACTCGCCTTACGGCT-3’;
FOXA1-R: 5’-TACACACCTTGGTAGTACGCC-3’;
TET1-F: 5’-CATCAGTCAAGACTTTAAGCCCT-3’;
TET1-R: 5’-CGGGTGGTTTAGGTTCTGTTT-3’;
LIG1 P1-F: 5’-GCTAAAACCTCCTCCCC-3’;
LIG1 P1-R: 5’-CATGAAGCATGTGACCG-3’;
GAPDH-F: 5’-GGAGCGAGATCCCTCCAAAAT-3’;
GAPDH-R: 5’-GGCTGTTGTCATACTTCTCATGG-3’;
U6-F: 5’-CTCGCTTCGGCAGCACA-3’;
U6-R: 5’-AACGCTTCACGAATTTGCGT-3’.
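The 2^−ΔCt and 2^−ΔΔCt quantification described above can be sketched in Python. The Ct values below are hypothetical and serve only to illustrate the arithmetic:

```python
def delta_ct(ct_target, ct_ref):
    """Relative abundance by the 2^-dCt method, as used for tissue samples
    (target Ct normalised to the GAPDH/U6 reference Ct)."""
    return 2.0 ** -(ct_target - ct_ref)

def delta_delta_ct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method: normalise to the reference
    gene, then to the control group."""
    d_ct = ct_target - ct_ref                  # delta-Ct, experimental sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # delta-Ct, control sample
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Target amplifies 2 cycles earlier (relative to reference) than in control
print(delta_delta_ct(24.0, 18.0, 26.0, 18.0))  # → 4.0, i.e. 2^2-fold higher
```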
Identification of hsa_circ_0007919
The qPCR product amplified with the hsa_circ_0007919 primers was validated by Sanger sequencing (Sangon, China). Total gDNA was extracted with the FastPure Cell/Tissue DNA Isolation Mini Kit (Vazyme, China), and qPCR products amplified from cDNA and gDNA were separated on a 1% agarose gel. Total RNA was treated with RNase R (Epicentre, USA) at 37°C for 30 min and then analyzed by qRT-PCR as described above.
Half maximal inhibitory concentration (IC50) detection assay
A total of 12 groups of 4 × 10 3 PDAC cells were seeded separately into 96-well plates, treated with GEM at concentrations of 0, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6, 51.2 and 102.4 μM for 48 h, and then assessed as described for the CCK-8 assay.
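The paper does not state the curve model used to derive IC50 values from these viability readings; a common choice is a four-parameter logistic (Hill) fit, sketched here with hypothetical viability fractions over the concentration series above (the zero-dose well serves as the untreated control):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    # Four-parameter logistic dose-response model
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# GEM concentrations (µM), excluding the zero-dose control used for normalisation
conc = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6, 51.2, 102.4])

# Hypothetical CCK-8 viabilities normalised to the untreated control
viability = np.array([0.98, 0.96, 0.93, 0.88, 0.78, 0.62,
                      0.45, 0.30, 0.18, 0.10, 0.06])

popt, _ = curve_fit(hill, conc, viability, p0=[0.0, 1.0, 5.0, 1.0], maxfev=10000)
bottom, top, ic50, slope = popt
print(f"IC50 ≈ {ic50:.2f} µM")
```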
CCK-8 assay
Cells after transfection or GEM treatment were collected and counted; 4 × 10 3 cells were then seeded into 96-well plates and cultured at 37°C. After 24, 48, 72 and 96 h, cells were incubated with 100 μl serum-free medium and 10 μl CCK-8 solution (Glpbio, USA) at 37°C for 2 h, and absorbance was measured at a wavelength of 450 nm (SPARK, Switzerland).
Apoptosis detection assay
Cells after transfection or GEM treatment were collected with EDTA-free trypsin solution (Beyotime, China), washed with PBS, incubated with Annexin V and PI solutions for 10 min and analyzed on a flow cytometer (BD, USA) according to the manufacturer’s protocol of the Cell Apoptosis Detection Kit (Biosharp, China).
Western blot assay
Total protein was extracted from cells with RIPA lysis buffer (Beyotime, China) and quantified with the BCA Protein Assay Kit (Beyotime, China); proteins were separated by SDS-PAGE and transferred to PVDF membranes (Millipore, Germany). After blocking with 5% skim milk, the membranes were incubated with primary and secondary antibodies and detected with Super ECL Detection Reagent (Yeasen, China) using a luminescent imaging system (Tanon, China). The antibodies used are as follows: anti-caspase3 (19677-1-AP, Proteintech, USA), anti-caspase9 (10380-1-AP, Proteintech, USA), anti-BCL2 (68103-1-Ig, Proteintech, USA), anti-GAPDH (60004-1-Ig, Proteintech, USA), HRP-goat anti-rabbit IgG (H + L) (BF03008, Biodragon, China), HRP-goat anti-mouse IgG (H + L) (BF03001, Biodragon, China), anti-γ-H2AX (AP0687, Abclonal, China), CoraLite594-conjugated goat anti-rabbit IgG (H + L) (SA00013-4, Proteintech, USA), anti-LIG1 (18051-1-AP, Proteintech, USA), anti-Ki67 (GB111499, Servicebio, China), anti-FOXA1 (GTX100308, GeneTex, USA), anti-TET1 (AB_2793752, Active Motif, USA), anti-QKI (13169-1-AP, Proteintech, USA).
Single-cell gel electrophoresis (comet assay)
A layer of 0.8% normal-melting-point agarose (Vicmed, China) was placed on a glass slide; 5 × 10 3 cells in 0.6% low-melting-point agarose (Biosharp, China) were then layered on top and, after lysis, electrophoresed in a horizontal electrophoresis tank. Finally, cells were stained with PI solution (Biosharp, China) and photographed with an inverted fluorescence microscope (Olympus, Japan).
DNA ladder assay
Total DNA of transfected cells was extracted with the FastPure Cell/Tissue DNA Isolation Mini Kit (Vazyme, China). Briefly, cells were collected and treated with RNase solution and Proteinase K at room temperature, then mixed with buffer GB and anhydrous ethanol. After washing with wash buffer, DNA was eluted in elution buffer. Finally, DNA was separated on a 1% agarose gel and photographed using a luminescent imaging system (Tanon, China).
Immunofluorescence (IF)
Cells were fixed with 4% paraformaldehyde (Vicmed, China) and blocked with 5% bovine serum albumin (BSA) (Solarbio, China), then incubated with primary antibody, fluorescent secondary antibody and DAPI (Bioss, China) and photographed with a confocal laser microscope (ZEISS, Germany).
Construction of stable knockdown cell lines and xenograft model
PANC-1 and CFPAC-1 GEM-resistant cells were infected with hsa_circ_0007919 knockdown lentivirus (GenePharma, China) or negative control lentivirus and selected with puromycin (Solarbio, China) for over 2 weeks; knockdown efficiency was verified by qPCR. A total of 5 × 10 6 lentivirus-infected cells were injected into the flank region of nude mice (Gempharmatech, China), which were then treated with GEM (50 mg/kg, i.p.) every 4 days. Tumor volumes were measured every 5 days, and 25 days after injection the mice were sacrificed and the tumors were harvested. The shRNA sequences are as follows:
sh-hsa_circ_0007919: 5’-CACCGAGGTGGAAGCAGGGAAAGTTCGAAAAAATTGATCAATGCCGAGGA-3’;
sh-Ctrl: 5’-CACCGTTCTCCGAACGTGTCACGTTTCGAAAAACGTGACACGTTCGGAGAA-3’.
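The paper does not state how tumor volume was computed from the caliper measurements. A common convention for subcutaneous xenografts is the ellipsoid approximation below; this formula is an assumption, not the authors' documented method:

```python
def tumor_volume_mm3(length_mm, width_mm):
    # Ellipsoid approximation V = (L x W^2) / 2, widely used for
    # caliper-measured xenografts; assumed here, not stated in the paper.
    return 0.5 * length_mm * width_mm ** 2

print(tumor_volume_mm3(10.0, 6.0))  # → 180.0 mm^3
```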
Immunohistochemistry (IHC)
Tumors were fixed with 4% paraformaldehyde, paraffin-embedded and sliced into sections; sections were deparaffinized and rehydrated with xylene and graded alcohol (Sinoreagent, China). Antigen retrieval was performed with citrate solution, and sections were blocked with goat serum, incubated with primary and secondary antibodies according to the manufacturer’s protocol of the SP Kit (ZSGB-BIO, China), and stained with the DAB Staining Kit (ZSGB-BIO, China) and hematoxylin solution (Sinoreagent, China). Sections were photographed with an inverted microscope (Olympus, Japan). A relative staining score was calculated according to the proportion of positively stained cells and the intensity of staining. The proportion of positive cells was scored as follows: 0 (0–5%), 1 (6–25%), 2 (26–50%), 3 (51–75%), 4 (> 75%); the intensity was scored as follows: 0 (negative), 1 (weak), 2 (moderate), 3 (strong).
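The two sub-scores above can be combined into a single IHC score; the text does not state the combination rule, so the product of proportion and intensity scores (a common convention) is assumed in this sketch:

```python
def proportion_score(pct_positive):
    # Bins from the scoring scheme: 0 (0-5%), 1 (6-25%), 2 (26-50%),
    # 3 (51-75%), 4 (>75%)
    if pct_positive <= 5:
        return 0
    if pct_positive <= 25:
        return 1
    if pct_positive <= 50:
        return 2
    if pct_positive <= 75:
        return 3
    return 4

INTENSITY = {"negative": 0, "weak": 1, "moderate": 2, "strong": 3}

def ihc_score(pct_positive, intensity):
    # Combined score = proportion score x intensity score
    # (product rule assumed; the paper does not specify it).
    return proportion_score(pct_positive) * INTENSITY[intensity]

print(ihc_score(60, "moderate"))  # → 6
```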
TUNEL assay
The TUNEL assay was performed according to the manufacturer’s protocol of the TUNEL Apoptosis Detection Kit (Color Development) (Beyotime, China). Tissue sections were rehydrated as described for IHC; after treatment with Proteinase K at 37°C, the sections were incubated with 3% hydrogen peroxide solution and then with biotin labeling solution containing TdT enzyme and biotin-dUTP in the dark at 37°C. The sections were then stained with streptavidin-HRP solution and DAB staining solution and photographed with an inverted microscope (Olympus, Japan).
Gene set enrichment analysis (GSEA)
GSEA was performed on the normalized data using the GSEA v2.0 tool ( http://www.broad.mit.edu/gsea/ ). We compared gene expression in PANC-1 GEM-resistant cells transfected with hsa_circ_0007919 siRNA or negative control siRNA. Three gene sets were used for analysis (KEGG_BASE_EXCISION_REPAIR, KEGG_MISMATCH_REPAIR, KEGG_NUCLEOTIDE_EXCISION_REPAIR); the detailed gene lists can be found in MSigDB ( http://software.broadinstitute.org/gsea/msigdb/genesets.jsp ). P values for the differences between the two groups were analyzed with the Kolmogorov–Smirnov test.
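The GSEA tool itself uses a weighted KS-like running-sum statistic with permutation testing. As a simplified illustration of the underlying idea, a plain two-sample Kolmogorov–Smirnov test comparing hypothetical fold changes of gene-set members against background genes could look like this:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical log2 fold changes after hsa_circ_0007919 silencing:
# repair gene-set members shift downward, background genes centre on zero.
in_set = rng.normal(loc=-0.8, scale=0.5, size=50)
background = rng.normal(loc=0.0, scale=0.5, size=500)

# KS test asks whether the two distributions differ
res = ks_2samp(in_set, background)
print(f"KS statistic = {res.statistic:.3f}, p = {res.pvalue:.2e}")
```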
Fluorescence in situ hybridization (FISH)
Cells were fixed with 4% paraformaldehyde and incubated overnight at 37°C with hybridization solution containing probes; cells were then washed with SSC solution and stained with DAPI according to the manufacturer’s protocol of the FISH Kit (RIBOBIO, China). Images were photographed with a confocal laser microscope (ZEISS, Germany).
Nuclear-cytoplasmic fractionation assay
Nuclear and cytoplasmic RNA were extracted with the Cytoplasmic & Nuclear RNA Purification Kit (Norgen Biotek, Canada) according to the manufacturer’s protocol, and hsa_circ_0007919 expression in the nucleus and cytoplasm was detected by qPCR.
Chromatin isolation by RNA purification (ChIRP)
ChIRP was used to detect proteins binding to hsa_circ_0007919 and was performed according to the manufacturer’s protocol of the ChIRP Kit (Bersinbio, China). Briefly, cells were cross-linked with paraformaldehyde and lysed by sonication; the lysate was incubated with biotin-labeled hsa_circ_0007919 probes (RIBOBIO, China) and magnetic beads, and the bound proteins were finally extracted and detected by WB.
Co-immunoprecipitation (Co-IP)
Total protein extracted with cell lysis buffer for IP (Beyotime, China) was incubated with antibodies and magnetic beads; bound proteins were eluted with 2×SDS-PAGE Sample Loading Buffer (Beyotime, China) and detected by WB.
RNA binding protein immunoprecipitation (RIP)
The RIP assay was conducted according to the manufacturer’s protocol of the RNA Immunoprecipitation Kit (GENESEED, China). The RNA-protein mixture was collected and incubated with antibodies and magnetic beads; RNA was then extracted with an adsorption column and detected by qPCR.
Chromatin immunoprecipitation (ChIP)
The ChIP assay was conducted according to the manufacturer’s protocol of the Simple ChIP Enzymatic Chromatin IP Kit (CST, USA). Cells were cross-linked with 1% paraformaldehyde, and chromatin was enzymatically digested into fragments. The lysate was then incubated with antibodies and magnetic beads, DNA was extracted from the beads with purification spin columns, and the bound DNA fragments were detected by qPCR.
Methylation-specific PCR (MS-PCR)
Total DNA was extracted with the FastPure Cell/Tissue DNA Isolation Mini Kit (Vazyme, China); DNA was then denatured and bisulfite-converted with the EZ DNA Methylation-Gold Kit (ZYMO RESEARCH, USA) and amplified with the Methylation Specific PCR Kit (TIANGEN, China) according to the manufacturers’ protocols. The products were finally separated on a 1% agarose gel and photographed with a luminescent imaging system. Methylated and unmethylated primers targeting the LIG1 promoter were designed with MethPrimer 2.0 ( http://www.urogene.org/methprimer2/ ), and the sequences are as follows:
LIG1 M-F: 5’-GAGAAGAAGGTTCGTTTTCGTAG-3’;
LIG1 M-R: 5’-ATAAAATAAATAAAATACCCCGAAT-3’;
LIG1 U-F: 5’-GAGAAGAAGGTTTGTTTTTG-3’;
LIG1 U-R: 5’-AAAATAAAATAAATAAAATACCCCAAAT-3’.
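MethPrimer's M/U primer pairs exploit the chemistry of bisulfite conversion: unmethylated cytosines are deaminated to uracil and read as T after PCR, while methylated CpG cytosines are protected and remain C. A toy sketch of that conversion logic, assuming all-or-none CpG methylation:

```python
def bisulfite_convert(seq, methylated_cpg=False):
    # Unmethylated C -> U (reads as T after PCR); 5-methyl-C at CpG sites
    # is protected and stays C when methylated_cpg is True.
    out = []
    for i, base in enumerate(seq):
        if base == "C":
            if methylated_cpg and i + 1 < len(seq) and seq[i + 1] == "G":
                out.append("C")   # protected CpG cytosine
            else:
                out.append("T")   # deaminated, reads as T
        else:
            out.append(base)
    return "".join(out)

print(bisulfite_convert("ACGTCCG", methylated_cpg=True))   # → ACGTTCG
print(bisulfite_convert("ACGTCCG", methylated_cpg=False))  # → ATGTTTG
```

Because the methylated and unmethylated templates diverge after conversion, the M and U primer pairs each amplify only one of them, which is what the gel readout distinguishes.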
Luciferase reporter assay
The luciferase reporter assay was conducted according to the manufacturer’s protocol of the Dual Luciferase Reporter Gene Assay Kit (Beyotime, China). Cells transfected with the pGL3-basic plasmid containing the LIG1 promoter (Genecreate, China) and the pRL-TK control plasmid, with or without hsa_circ_0007919 knockdown, were collected and lysed; firefly and Renilla luciferase detection reagents were then added sequentially, and luminescence was measured with a multifunctional microplate reader (SPARK, Switzerland).
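In dual-luciferase assays the firefly reading is conventionally normalised to the co-transfected pRL-TK Renilla signal (to correct for transfection efficiency) and then expressed relative to the control group. The readings below are hypothetical:

```python
def relative_luciferase_activity(firefly, renilla, firefly_ctrl, renilla_ctrl):
    # Normalise firefly signal from the LIG1-promoter reporter to the
    # Renilla transfection control, then express relative to the control group.
    return (firefly / renilla) / (firefly_ctrl / renilla_ctrl)

# Hypothetical readings: hsa_circ_0007919 knockdown halves promoter activity
print(relative_luciferase_activity(4000, 1000, 8000, 1000))  # → 0.5
```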
Statistical analysis
All values are expressed as the mean ± standard deviation (SD). The significance of differences was assessed by Student’s t-test or one-way ANOVA. Kaplan–Meier analysis was used for survival analysis, and differences in survival probabilities were assessed with the log-rank test. Correlations between hsa_circ_0007919 expression and clinicopathological variables were analyzed by the chi-squared test. p < 0.05 was considered significant. Statistical analyses were performed using SPSS version 25.0 (SPSS, Inc., USA).
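As an illustration of the chi-squared association test used for the clinicopathological comparisons (Table 1), here is a sketch with a hypothetical 2×2 contingency table; the counts are invented, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: hsa_circ_0007919 expression (rows: high, low)
# vs lymph node metastasis (columns: yes, no)
table = np.array([[28, 12],
                  [15, 25]])

# chi2_contingency applies Yates continuity correction for 2x2 tables by default
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```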
hsa_circ_0007919 is upregulated in GEM-resistant PDAC and predicts poor prognosis
We performed next-generation sequencing to identify circRNAs contributing to GEM resistance in three GEM-resistant and three GEM-sensitive PDAC tissues. A total of 62 circRNAs were differentially expressed (log2FC < −1 or > 1 and p < 0.05), and hsa_circ_0007919 was significantly upregulated in GEM-resistant PDAC tissues compared with GEM-sensitive tissues (log2FC = 4.213454, p = 0.000312, Fig. 1 A). We then measured the expression of hsa_circ_0007919 in 50 pairs of non-GEM-resistant PDAC tissues and adjacent tissues and 45 pairs of GEM-resistant PDAC tissues and related adjacent tissues. The results showed that hsa_circ_0007919 expression was markedly upregulated in GEM-resistant PDAC tissues compared with GEM-sensitive PDAC tissues (Fig. 1 B). Compared with that in normal human pancreatic duct cells, hsa_circ_0007919 expression was increased in PDAC cells, including PANC-1, CFPAC-1, BxPC-3 and MIA-PaCa2, and was relatively high in PANC-1 and CFPAC-1 cells (Fig. 1 C). We next examined the structure of hsa_circ_0007919, which is derived from exons 3–16 of the ABR gene, and validated its back-splicing junction by Sanger sequencing (Fig. 1 D-E); the results were consistent with the data in the circBase database ( https://www.circbase.org ). We also designed divergent and convergent primers to detect hsa_circ_0007919 in both cDNA and gDNA; hsa_circ_0007919 could be amplified from cDNA but not gDNA (Fig. 1 F), and its resistance to digestion by the exonuclease RNase R confirmed that it is indeed circular (Fig. 1 G). Finally, we used the clinical tissue data to analyze correlations between hsa_circ_0007919 expression and clinicopathological features. We divided the 80 PDAC patients with or without GEM treatment into two groups with high (40 samples) or low (40 samples) hsa_circ_0007919 expression.
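The differential-expression filter described above (|log2FC| > 1 and p < 0.05) amounts to a simple boolean mask over the per-circRNA statistics; the values below are hypothetical:

```python
import numpy as np

# Hypothetical per-circRNA statistics (log2 fold change, p value)
log2fc = np.array([4.21, 0.30, -1.80, 0.90, 2.20])
pvals = np.array([0.0003, 0.40, 0.01, 0.03, 0.20])

# Differentially expressed: |log2FC| > 1 and p < 0.05
selected = (np.abs(log2fc) > 1) & (pvals < 0.05)
print(selected)  # [ True False  True False False]
```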
As shown in Table 1 , high expression of hsa_circ_0007919 was significantly correlated with vascular invasion ( p = 0.032), nerve invasion ( p = 0.039), T stage ( p = 0.018), lymph node metastasis ( p = 0.034) and TNM stage ( p = 0.003), while there was no prominent association of hsa_circ_0007919 expression with age, gender, tumor location or degree of differentiation. Moreover, analysis of the relationship of hsa_circ_0007919 expression with the overall survival (OS) and disease-free survival (DFS) of GEM-resistant patients showed that high expression of hsa_circ_0007919 predicted poor OS and DFS ( p < 0.05) (Fig. 1 H-I). Furthermore, we divided the 40 GEM-treated PDAC patients into two groups with high (n = 20) or low (n = 20) hsa_circ_0007919 expression and found that high expression of hsa_circ_0007919 similarly predicted poor OS and DFS ( p < 0.01) (Fig. 1 J-K).
hsa_circ_0007919 inhibits DNA damage and decreases gemcitabine sensitivity
GEM is one of the most common chemotherapy drugs in the clinical treatment of PDAC and can cause SSBs and DSBs by mediating base damage; however, PDAC patients often have adverse clinical outcomes due to chemoresistance [ 18 ]. Since hsa_circ_0007919 was upregulated in GEM-resistant PDAC tissues, we investigated its function in GEM-resistant cells. First, we silenced hsa_circ_0007919 in PDAC cells and found that hsa_circ_0007919 inhibition increased GEM sensitivity (Fig. 2 A-B, S1 A). We then constructed the GEM-resistant PDAC cell lines PANC-1/GEM and CFPAC-1/GEM (Fig. 2 C-D) and found that hsa_circ_0007919 was highly expressed in these GEM-resistant cells (Fig. 2 E). Next, we silenced hsa_circ_0007919 in both GEM-resistant cell lines and overexpressed it in parental PANC-1 and CFPAC-1 cells treated with GEM (Fig. S1 B-C); the results of the CCK-8, FCM and DNA ladder assays indicated that hsa_circ_0007919 silencing decreased proliferation and increased apoptosis, while hsa_circ_0007919 overexpression had the opposite effects (Fig. 2 F-K, S1D-G). Consistent with the apoptosis assay results, hsa_circ_0007919 silencing increased the levels of cleaved caspase 3 and cleaved caspase 9 and decreased BCL2 expression, while hsa_circ_0007919 overexpression decreased the levels of cleaved caspase 3 and cleaved caspase 9 and increased BCL2 expression (Fig. 2 L-O). GEM, a pyrimidine antimetabolite, can induce single-base damage and lead to DNA breaks, so we evaluated the influence of hsa_circ_0007919 on DNA damage and found that hsa_circ_0007919 silencing increased the comet tails of single cells in gel electrophoresis and the accumulation of γ-H2AX in the nucleus, while hsa_circ_0007919 overexpression decreased both (Fig. 3 A-B).
Finally, we established a xenograft model in nude mice and found that the volumes and weights of tumors formed by hsa_circ_0007919-silenced PANC-1/GEM and CFPAC-1/GEM cells were decreased compared with those of tumors formed by control cells (Fig. 3 C-G). The IHC, TUNEL and qPCR results showed that hsa_circ_0007919 silencing decreased Ki-67 expression and increased caspase3 and γ-H2AX expression and cell apoptosis (Fig. 3 H-J, S2A-B). These results revealed that hsa_circ_0007919 enhances GEM resistance in PDAC cells by decreasing DNA damage, thereby promoting proliferation and reducing apoptosis.
hsa_circ_0007919 inhibits DNA damage through LIG1-mediated repair pathways
To determine how hsa_circ_0007919 inhibits DNA damage and influences GEM sensitivity, we performed RNA-seq to identify differentially expressed genes in hsa_circ_0007919-silenced PANC-1/GEM cells compared with control cells. There were 520 upregulated and 219 downregulated genes (Fig. 4 A), and KEGG analysis and GSEA showed that these genes were enriched in multiple DNA damage repair pathways, including base excision repair, mismatch repair and nucleotide excision repair (Fig. 4 B-E). LIG1 was the most significantly downregulated gene common to all of these pathways (Fig. 4 F, S2C-E). LIG1, a member of the DNA ligase family, has been reported to play an important role in almost all DNA damage repair pathways [ 19 ]. Therefore, we measured LIG1 expression and found that it was also highly expressed in GEM-resistant PDAC tissues compared with GEM-sensitive PDAC tissues and was positively correlated with hsa_circ_0007919 expression in PDAC tissues (Fig. 4 G-I). The mRNA and protein levels of LIG1 were decreased after hsa_circ_0007919 silencing and increased after hsa_circ_0007919 overexpression (Fig. 4 J-M). These results revealed that hsa_circ_0007919 induces LIG1 expression to activate DNA damage repair pathways and enhance GEM resistance in PDAC cells.
LIG1 reverses the effects of hsa_circ_0007919 silencing on cell proliferation, apoptosis and DNA damage
To confirm that LIG1 is the downstream target of hsa_circ_0007919, we investigated the role of LIG1 in GEM-resistant PDAC cells and found that silencing LIG1 resulted in decreased proliferation and increased apoptosis and DNA damage (Fig. 5 A-G, S3A-B). Moreover, we overexpressed LIG1 in PANC-1/GEM and CFPAC-1/GEM cells with stable hsa_circ_0007919 silencing (Fig. S3 C-D) and found that LIG1 overexpression reversed the changes in cell proliferation, apoptosis and DNA damage caused by hsa_circ_0007919 silencing (Fig. 5 H-N, S3E). These results revealed that hsa_circ_0007919 increases LIG1 expression to promote cell proliferation and reduce apoptosis and DNA damage.
hsa_circ_0007919 binds to FOXA1 and TET1 to promote LIG1 transcription
To investigate how hsa_circ_0007919 increases LIG1 expression, we performed FISH and nuclear-cytoplasmic RNA fractionation assays; the results showed that hsa_circ_0007919 was mainly distributed in the nucleus (Fig. 6 A-B). We determined the overlap between the proteins that bind hsa_circ_0007919 and those that bind LIG1 mRNA using the circAtlas 2.0 ( http://circatlas.biols.ac.cn/ ) and ENCORI ( http://starbase.sysu.edu.cn/ ) databases but could not identify any overlapping proteins (Fig. S4 A). We then determined the overlap between the proteins that bind hsa_circ_0007919 and those that bind the LIG1 promoter using the circAtlas 2.0 and SPP ( https://www.signalingpathways.org ) databases, and FOXA1 was identified as the most significant overlapping protein (Fig. 6 C). We further predicted that other proteins may function together with FOXA1 and identified TET1, which binds FOXA1, using the STRING database ( https://cn.string-db.org/ ) (Fig. 6 D). Since FOXA1 functions as a transcriptional activator in multiple kinds of cancers and TET1 functions as a DNA methylhydroxylase that decreases the methylation of various gene promoters to enhance their transcription [ 20 , 21 ], we hypothesized that hsa_circ_0007919 binds FOXA1 and TET1 to promote LIG1 transcription. We first silenced FOXA1 and TET1 and found that LIG1 expression was decreased (Fig. 6 E-H, S4B-C), and co-IP results confirmed the interaction between FOXA1 and TET1 in GEM-resistant cells (Fig. 6 I-J). Meanwhile, silencing TET1 in FOXA1-silenced GEM-resistant cells enhanced the inhibitory effect of FOXA1 silencing on LIG1, while overexpressing TET1 partly reversed this inhibition, indicating that FOXA1 and TET1 act synergistically in the regulation of LIG1 (Fig. 6 K-L).
Then, we performed a ChIRP assay and found that hsa_circ_0007919 could bind to FOXA1 and TET1 (Fig. 6 M-N). Furthermore, we used a RIP assay to confirm that FOXA1 and TET1 can interact with hsa_circ_0007919, and this interaction was enhanced in GEM-resistant cells (Fig. 6 O-P).
To investigate the interactions among FOXA1, TET1 and the LIG1 promoter, we analyzed the binding sites between FOXA1 and the LIG1 promoter using the JASPAR database ( https://jaspar.genereg.net/ ) and predicted CpG islands in the LIG1 promoter using MethPrimer 2.0 ( http://www.urogene.org/methprimer2/ ). Among the 6 sites identified by JASPAR and the 4 predicted CpG islands, site 6 in the LIG1 promoter was the most enriched, so we chose the −1411 to −1273 region (P1) for further study (Fig. 7 A-B, S4D-E). We found that treatment with 5-AzaC increased LIG1 expression in GEM-resistant cells (Fig. 7 C). The ChIP assay revealed that FOXA1 and TET1 bind to the LIG1 promoter region P1 and that hsa_circ_0007919 inhibition decreased this binding (Fig. 7 D). MS-PCR showed that silencing hsa_circ_0007919 or TET1 increased the DNA methylation level of the LIG1 promoter (Fig. 7 F), while overexpressing hsa_circ_0007919 or TET1 had the opposite effect (Fig. 7 E and G). Furthermore, a luciferase reporter assay showed that silencing hsa_circ_0007919, FOXA1 or TET1 decreased the transcriptional activity of the LIG1 promoter, while overexpressing hsa_circ_0007919, FOXA1 or TET1 enhanced it (Fig. 7 H-I). These results revealed that hsa_circ_0007919 enhances LIG1 transcription by binding FOXA1 and TET1.
Gemcitabine induces hsa_circ_0007919 expression by enhancing QKI-mediated back-splicing
CircRNAs are generated by back-splicing of exons or introns of their host genes; hsa_circ_0007919 is formed by circularization of ABR exons 3–16 (Fig. 1 D), and its expression was upregulated in GEM-resistant PDAC tissues and cells compared with GEM-sensitive tissues and cells (Figs. 1 B and 2 E). Studies have revealed that multiple proteins are involved in back-splicing during circRNA biogenesis; among several well-recognized regulators, QKI and FUS enhance circRNA formation, while ADAR1 exerts the opposite effect [ 22 – 24 ]. To investigate the reason for high hsa_circ_0007919 expression, we analyzed the correlations between the expression of the abovementioned proteins and that of hsa_circ_0007919 in GEM-resistant PDAC tissues and found that QKI expression was positively correlated with hsa_circ_0007919 expression, while FUS showed a weaker correlation and ADAR1 was negatively correlated (Fig. 8 A, S5A-B). Therefore, we predicted that QKI promotes the formation of hsa_circ_0007919. We silenced QKI in GEM-resistant PDAC cells and found that hsa_circ_0007919 expression was downregulated while expression of the host gene ABR was unaffected (Fig. 8 B-C, S5C-D); moreover, QKI expression showed no difference between parental and GEM-resistant PDAC cells (Fig. 8 D-E). QKI has been reported to interact with the introns flanking circRNA-forming exons in pre-mRNA. We therefore designed primers for ABR introns 2 and 16 and found that QKI bound both introns in PDAC cells and that this interaction was enhanced in GEM-resistant PDAC cells (Fig. 8 F-G). These results revealed that GEM promotes the formation of hsa_circ_0007919 by enhancing the interaction between QKI and the hsa_circ_0007919-flanking introns, thereby promoting back-splicing and circularization.
In summary, this study delineates the mechanisms by which GEM enhances QKI-mediated hsa_circ_0007919 splicing and circularization and hsa_circ_0007919 recruits FOXA1 and TET1 to modulate LIG1 transcription and DNA damage repair pathways, which contribute to resistance to GEM-induced DNA damage and apoptosis in PDAC cells (Fig. 8 H). | Discussion
PDAC is one of the most aggressive and deadly malignancies and is expected to become the second leading cause of cancer-related death within a decade [ 2 ]. Although molecular mechanistic research and treatment methods for PDAC have progressed in recent decades, the 5-year survival rate of PDAC is still the lowest among all malignant tumors, in part due to chemoresistance [ 25 ]. CircRNAs are a class of noncoding RNAs that are involved in multiple steps of tumor development. For example, PTK2 exon-derived hsa_circ_0005273 promotes the proliferation and metastasis of BC cells by binding miR-200a-3p to upregulate YAP1 expression and inhibit the Hippo pathway [ 26 ], and the interaction of circ-GALNT16 with p53 is enhanced to inhibit the proliferation and metastasis of colorectal cancer cells via inhibition of Senp2-mediated hnRNPK desumoylation [ 27 ]. We focused mainly on the relationships between circRNAs and GEM resistance in PDAC. Three GEM-resistant and three GEM-sensitive PDAC tissues were collected from clinical surgical specimens for circRNA expression profiling with a circRNA chip, and hsa_circ_0007919 expression was found to be significantly increased in GEM-resistant PDAC tissues. Hsa_circ_0007919 is located at chr17:953,289–1,003,975, with a length of 1545 bp. We found that hsa_circ_0007919 was highly expressed in GEM-resistant PDAC tissues and cells and that it inhibited the apoptosis and DNA damage induced by GEM treatment. Hsa_circ_0007919 has been shown to be involved in the progression of ulcerative colitis and tuberculosis [ 28 , 29 ], and the current study suggests that it also plays an important role in GEM resistance in PDAC.
The predominant cancer treatments other than surgery are radiation and chemotherapy, which act by inducing DNA damage [ 30 ]. GEM is a common drug used in clinical chemotherapy for PDAC and usually acts by inducing SSBs [ 8 ]; a DSB can be formed when two SSBs are located near each other or when the DNA replication apparatus encounters an SSB, and DSBs are difficult to repair and extremely toxic [ 31 ]. To combat the hazard posed by DNA damage, cancer cells have evolved DDR pathways to facilitate DNA damage repair [ 32 ]. Among the components of these pathways, LIG1, a DNA ligase, completes the repair of almost all types of DNA damage by religating the broken phosphodiester backbone [ 33 ]. In addition, genetic deletion or low expression of LIG1 has been associated with selective carboplatin resistance in preclinical models of triple-negative breast cancer (TNBC) [ 34 ]. LIG1 deletion in ovarian cancer (OC) cells increased platinum cytotoxicity, which was associated with the accumulation of DSBs, S-phase arrest and an increased proportion of apoptotic cells [ 19 ]. We performed RNA-seq analysis of control and hsa_circ_0007919-silenced GEM-resistant PDAC cells, followed by KEGG enrichment analysis and GSEA of the differentially expressed genes. The results indicated that base excision repair, mismatch repair and nucleotide excision repair were the top-ranked enriched pathways, and LIG1 was enriched in all three DNA damage repair pathways. Here, we found that silencing hsa_circ_0007919 decreased LIG1 expression and inhibited LIG1-mediated DNA damage repair pathways, thereby resensitizing GEM-resistant PDAC cells to GEM. These results indicate that hsa_circ_0007919 could be a potential therapeutic target for the treatment of GEM-resistant PDAC.
FOXA1 is a member of the Forkhead box protein family that is involved in cell growth and differentiation and is also a DNA-binding protein involved in transcription and DNA repair [ 35 ]. Many members of the Forkhead box family are associated with pancreatic metabolism and differentiation and with the development of pancreatic cancer (PC). FOXO1 inhibition can mimic β-cell differentiation by downregulating β-cell-specific transcription, leading to abnormal expression of progenitor genes and the α-cell marker glucagon [ 36 ], and FOXD1 directly promotes the transcription of SLC2A1 and inhibits its degradation through the RNA-induced silencing complex, thereby promoting aerobic glycolysis in PC cells and enhancing their proliferation and metastasis [ 37 ]. Meanwhile, FOXA1 has been associated with multiple cancers, especially prostate cancer (PCa) and BC. FOXA1 contributes to the activation of androgen receptor (AR) signaling, which drives the growth and survival of PCa cells, through direct interaction with AR, and also has an AR-independent role in regulating epithelial-mesenchymal transition (EMT) [ 38 ]; FOXA1 also binds the DNA-binding domain of STAT2 and inhibits STAT2 DNA-binding activity, IFN signaling gene expression and the tumor immune response in PCa and BC [ 39 ]. In addition to the binding of transcription factors to promoter regions, methylation of promoter regions also plays an important role in the regulation of gene expression, with hypermethylation of most gene promoters leading to reduced transcription [ 40 ]. TET1, a DNA demethylase, maintains genomic methylation homeostasis and accomplishes epigenetic regulation, affecting stem cells, immune responses and various malignant tumors [ 41 ]. TET1 promotes the transcription of CHL1 by binding to and demethylating the CHL1 promoter, thereby inhibiting the Hedgehog pathway and EMT and sensitizing PDAC cells to 5-FU and GEM [ 42 ].
However, the roles of FOXA1 and TET1 in GEM resistance in PDAC remain unknown. Here, we found that FOXA1 and TET1 can both bind to the promoter of LIG1 and that TET1 mediates demethylation of the LIG1 promoter and enhances FOXA1-mediated transcription of LIG1. It has been reported that the related transcription factor FOXA2 is required for the regulation of pancreatic endoderm development, and TET1 deletion results in significant changes in FOXA2 binding in pancreatic progenitor cells. Loci with reduced FOXA2 binding have low levels of active chromatin modification and are enriched in bHLH motifs, resulting in functional β-cell defects [ 43 ]. In this study, we also confirmed that TET1 can increase the transcriptional activity of FOXA1 in GEM-resistant PDAC cells, similar to the interaction between TET1 and FOXA2. These results extend our understanding of the interplay between DNA demethylases and transcription factors and of their synergistic effects on transcriptional regulation.
CircRNAs are generated by back-splicing of pre-mRNAs produced by transcription of host genes, and cis-regulatory elements, trans-acting factors, RNA-binding proteins (RBPs) and other related molecules can regulate the splicing and circularization of circRNAs [ 44 ]. Among these regulatory factors, QKI belongs to the STAR family of KH-domain-containing RNA-binding proteins and has been found to affect pre-mRNA splicing; QKI binds upstream and downstream of the circRNA-forming exons in SMARCA5 to promote circRNA formation [ 45 ]. FUS is a member of the FET protein family and is a reported regulator of circRNA biogenesis: circROBO1 upregulates KLF5 by sponging miR-217-5p, enabling KLF5 to activate FUS transcription and promote circROBO1 back-splicing, forming a positive feedback loop that enhances BC-derived liver metastasis [ 46 ]. ADAR1 is a member of the ADAR enzyme family that facilitates A-to-I editing of RNAs. CircNEIL3 can inhibit ADAR1 expression by inducing GLI1 RNA editing through sponging miR-432-5p, and ADAR1 inhibition in turn increases circNEIL3 expression, promoting EMT and cell cycle progression in PDAC [ 47 ]. We found that GEM treatment enhances the interaction between QKI and introns 2 and 16 of ABR pre-mRNA to promote the splicing and circularization of hsa_circ_0007919. These results suggest that GEM treatment can enhance the function of QKI without changing its expression, indicating a critical adaptive mechanism in the development of resistance to GEM and possibly other chemotherapeutic agents.
Taken together, our findings indicate that hsa_circ_0007919 can promote DNA damage repair to counteract GEM treatment. Mechanistically, hsa_circ_0007919 recruits FOXA1 and TET1 to promote LIG1 transcription and activates the base excision repair, mismatch repair and nucleotide excision repair pathways to ameliorate the DNA damage and suppress the apoptosis induced by GEM. Furthermore, the GEM treatment-enhanced interaction between QKI and ABR pre-mRNA leads to increased biogenesis of hsa_circ_0007919 in a back-splicing-dependent manner. Our findings could be helpful for understanding the mechanism of GEM resistance and for developing therapeutic strategies for chemotherapy-resistant PDAC.

Background
Circular RNAs (circRNAs) play important roles in the occurrence and development of cancer and chemoresistance. DNA damage repair contributes to the proliferation of cancer cells and resistance to chemotherapy-induced apoptosis. However, the role of circRNAs in the regulation of DNA damage repair needs clarification.
Methods
RNA sequencing analysis was applied to identify the differentially expressed circRNAs. qRT-PCR was conducted to confirm the expression of hsa_circ_0007919, and CCK-8, FCM, single-cell gel electrophoresis and IF assays were used to analyze the proliferation, apoptosis and gemcitabine (GEM) resistance of pancreatic ductal adenocarcinoma (PDAC) cells. Xenograft model and IHC experiments were conducted to confirm the effects of hsa_circ_0007919 on tumor growth and DNA damage in vivo. RNA sequencing and GSEA were applied to confirm the downstream genes and pathways of hsa_circ_0007919. FISH and nuclear-cytoplasmic RNA fractionation experiments were conducted to identify the cellular localization of hsa_circ_0007919. ChIRP, RIP, Co-IP, ChIP, MS-PCR and luciferase reporter assays were conducted to confirm the interaction among hsa_circ_0007919, FOXA1, TET1 and the LIG1 promoter.
Results
We identified a highly expressed circRNA, hsa_circ_0007919, in GEM-resistant PDAC tissues and cells. High expression of hsa_circ_0007919 correlates with poor overall survival (OS) and disease-free survival (DFS) in PDAC patients. Hsa_circ_0007919 inhibits the DNA damage, accumulation of DNA breaks and apoptosis induced by GEM in a LIG1-dependent manner to maintain cell survival. Mechanistically, hsa_circ_0007919 recruits FOXA1 and TET1 to decrease the methylation of the LIG1 promoter and increase its transcription, further promoting base excision repair, mismatch repair and nucleotide excision repair. Finally, we found that GEM enhanced the binding of QKI to the introns of ABR pre-mRNA, the host transcript of hsa_circ_0007919, and thereby the splicing and circularization of this pre-mRNA to generate hsa_circ_0007919.
Conclusions
Hsa_circ_0007919 promotes GEM resistance by enhancing DNA damage repair in a LIG1-dependent manner to maintain cell survival. Targeting hsa_circ_0007919 and DNA damage repair pathways could be a therapeutic strategy for PDAC.
Supplementary Information
The online version contains supplementary material available at 10.1186/s12943-023-01887-8.
Keywords

Electronic supplementary material
Below is the link to the electronic supplementary material.
Abbreviations
circRNA: Circular RNA
GEM: Gemcitabine
PDAC: Pancreatic ductal adenocarcinoma
OS: Overall survival
DFS: Disease-free survival
DDR: DNA damage response
5-FU: 5-fluorouracil
BC: Breast cancer
FBS: Fetal Bovine Serum
IF: Immunofluorescence
BSA: Bovine Serum Albumin
IHC: Immunohistochemical
GSEA: Gene set enrichment analysis
FISH: Fluorescence in situ hybridization
ChIRP: Chromatin Isolation by RNA Purification
Co-IP: Co-immunoprecipitation
RIP: RNA binding protein immunoprecipitation
ChIP: Chromatin immunoprecipitation
MS-PCR: Methylation-specific PCR
SD: Standard deviation
SSB: Single-strand break
DSB: Double-strand break
TNBC: Triple-negative breast cancer
OC: Ovarian cancer
PC: Pancreatic cancer
PCa: Prostate cancer
AR: Androgen receptor
EMT: Epithelial-mesenchymal transition
RBP: RNA binding protein
Acknowledgements
We thank BGI for providing RNA-seq and data analysis, and we also thank AJE for language editing.
Authors’ contributions
XL, MX and ZXZ: conceptualization, methodology, investigation, writing original draft, visualization. GS, WN, ZC and ZY: data curation, validation, investigation, formal analysis. ZP, FXY, GJX and ZMM: software, investigation. RZQ: project administration, supervision. ZPB: supervision, resources, writing review & editing. All authors reviewed the manuscript.
Funding
This research was supported by Science and Technology Project of Xuzhou Municipal Health Commission (XWKYHT20220152) and Xuzhou Medical University Affiliated Hospital Development Fund (XYFY2021017).
Data availability
The data generated in this study are available upon request from the corresponding author.
Declarations
Ethics approval and consent to participate
This study was performed according to the ethical standards of Declaration of Helsinki and was approved by the ethics committee of the Xuzhou Medical University. All animal experiments were approved by the Animal Care Committee of Xuzhou Medical University.
Consent for publication
We have obtained consents to publish this paper from all the participants of this study.
Conflict of interest
The authors declare no potential conflicts of interest.

Citation: Mol Cancer. 2023 Dec 4; 22:195. License: CC BY.
PMC10696062 (PMID: 38049453)

Introduction
As of September 23rd, 2023, over 70 million SARS-CoV-2 diagnostic tests have been performed in the United States 1 . Deploying individual testing programs at this scale is extraordinarily expensive and resource-intensive, and has become increasingly unsustainable in most jurisdictions. New strategies are needed for monitoring the spread and evolution of pathogens without relying on this widespread individual testing.
Environmental surveillance, be it through wastewater or air, shows promise for meeting this need. Without relying on individual testing, environmental surveillance has already enabled public health officials to rapidly assess infection risk in congregate settings and in communities writ large 2 – 5 . In the COVID-19 pandemic, the majority of environmental sampling for surveillance purposes has been through wastewater. However, air sampling has several advantages that make it complementary or even preferable to wastewater in certain settings. For example, air samplers are portable and intrinsically hyperlocal; they can be moved between individual rooms, installed into HVAC systems, and deployed densely in more open areas like airport terminals. Additionally, wastewater sampling may be unfeasible in some areas, such as rural areas where wastewater is disposed of in septic systems. Perhaps most importantly, many human viruses transmit predominantly through aerosols or respiratory droplets, which is exactly what air samplers are designed to collect. Air sampling makes it possible to identify a wide variety of aerosolized viruses while they are in the process of potentially spreading between hosts 6 – 9 .
However, few studies or real-world air sampler deployments have taken advantage of this bioaerosol diversity. Doing so would require the development of virus-agnostic, metagenomic detection methods, which, when combined with air sampling, could expand surveillance to any airborne virus. Early contributions in this area, e.g. 10 – 14 , demonstrate that it is possible to detect human viruses in various settings. However, this detection comes with a variety of technical challenges 15 . First, the relative abundance of aerosolized human viruses in the above studies’ air samplers was extremely low. For example, Prussin et al. 11 characterized airborne viral communities in a daycare center's HVAC system over one year. Over that year, commonly circulating human viruses accounted for less than 0.005% of the total genetic material, with the majority of total virus sequences coming from bacteriophages and plant-associated viruses. A second challenge is defining which portion of each viral genome to use for classification. Bacteria and fungi have universal genetic marker regions (16S and internal transcribed spacer (ITS) ribosomal RNA, respectively) that are used for sequencing and classification. In contrast, there is no single genetic marker shared across the many viruses that could be present in the air. This leaves unbiased amplification of human virus genetic material as the best option for detecting many, potentially underappreciated viruses in the air 16 . One especially promising method is sequence-independent single-primer amplification (SISPA) 17 , which has been used to detect a wide range of viruses in clinical samples 18 .
In 2021, we reported on the characterization of SARS-CoV-2 and other respiratory viruses in air samples collected from congregate settings 19 . We also used a semi-quantitative PCR assay in that study to detect 40 other pathogens, demonstrating that air samples could be used to monitor the variety of pathogens that may be present in the spaces around us. Here, we use SISPA to detect an even broader array of human RNA viruses from air collected in congregate settings. Understanding human viruses in built environments' air may help elucidate illness trends in communities over time. This approach could enhance air sampling as a tool for public health virus surveillance and preparedness against many emerging and re-emerging viruses. | Methods
Ethics statement
Our study does not evaluate the effectiveness of air samplers for diagnosing individuals for COVID-19 or other illnesses, nor does it collect samples directly from individuals. Therefore, it does not constitute human subjects research.
Air sample collection and processing
AerosolSense instruments (Thermo Fisher Scientific) were installed in a variety of indoor congregate settings to collect bioaerosols for pathogen surveillance from December 2021 to December 2023. AerosolSense instruments were placed on flat surfaces 1–1.5 m off the ground in high-traffic areas of an athletics training facility, preschool, emergency housing facility, brewery taproom, and five K-12 schools in the Upper Midwestern States of Wisconsin and Minnesota. Air samples were collected using AerosolSense cartridges (Thermo Fisher Scientific) according to the manufacturer’s instructions. The iOS and Android Askidd mobile app was used to collect air cartridge metadata and upload it to a centralized Labkey database, as previously described in Ramuta et al. 19 . After the air samples were removed from the instruments, they were transferred to the lab for further processing. Two air sample substrates were removed from each of the AerosolSense cartridges using sterile forceps to place them in two separate 1.5 mL tubes containing 500 μL of PBS. The tubes were vortexed for 20 s, centrifuged for 30 s, and stored at − 80 °C until RNA extraction and complementary DNA (cDNA) preparation.
Air sample total nucleic acid extraction and concentration
Total nucleic acids were extracted from air samples using the Maxwell 48 Viral Total Nucleic Acid Purification Kit (Promega) according to the manufacturer's recommendations. Briefly, 300 μL of air sample eluate was added to a 1.5 mL tube containing 300 μL of lysis buffer and 30 μL of Proteinase K. An unused air cartridge was processed with each Maxwell run to be used as a no-template control. The reaction mix was vortexed for 10 s and incubated at 56 °C for 10 min. Following the incubation, the tubes were centrifuged for 1 min. Then, 630 μL of the reaction mix was added to the Maxwell 48 cartridges, which were loaded into a Maxwell 48 instrument and processed with the Viral Total Nucleic Acid program. Nucleic acids were eluted in a final volume of 50 μL of nuclease-free water. To clean and concentrate the viral RNA, 30 μL of extracted total nucleic acids were treated with TURBO DNase (Thermo Fisher Scientific) and concentrated to 10 μL with the RNA Clean and Concentrator-5 kit (Zymo Research) according to the manufacturer’s protocols.
Air sample sequencing
A modified sequence-independent single primer amplification (SISPA) approach previously described by Kafetzopoulou et al. was used to generate cDNA from the air samples 18 , 41 . First, 1 μL of Primer A (Table 2 ) was added to 4 μL of concentrated viral RNA and incubated in a thermocycler at 65 °C for 5 min, followed by 4 °C for 5 min. To perform reverse transcription, 5 μL of SuperScript™ IV (SSIV) First-Strand Synthesis System (Invitrogen) master mix (1 μL of dNTP (10 mM), 1 μL of nuclease-free water, 0.5 μL of DTT (0.1 M), 2 μL of 5X RT buffer, and 0.5 μL of SSIV RT) was added to the reaction mix and incubated in a thermocycler at 42 °C for 10 min. To perform second-strand cDNA synthesis, 5 μL of Sequenase Version 2.0 DNA polymerase (Thermo Fisher Scientific) master mix (3.85 μL of nuclease-free water, 1 μL of 5X Sequenase reaction buffer, and 0.15 μL of Sequenase enzyme) was added to the reaction mix and incubated at 37 °C for 8 min. After the incubation, 0.45 μL of the Sequenase dilution buffer and 0.15 μL of Sequenase were added to the reaction mix and incubated at 37 °C for 8 min. To amplify the randomly primed cDNA, 5 μL of the cDNA was added to 45 μL of the Primer B reaction mix [5 μL of AccuTaq LA 10 × buffer, 2.5 μL of dNTP (10 mM), 1 μL of DMSO, 0.5 μL of AccuTaq LA DNA polymerase, 35 μL of nuclease-free water, and 1 μL of Primer B (Table 2 )]. The following thermocycler conditions were used to amplify the cDNA: 98 °C for 30 s, 30 cycles (94 °C for 15 s, 50 °C for 20 s, and 68 °C for 2 min), and 68 °C for 10 min. The amplified PCR product was purified using a 1:1 ratio of AMPure XP beads (Beckman Coulter) and eluted in 25 μL of nuclease-free water. The purified PCR products were quantified with the Qubit dsDNA high-sensitivity kit (Invitrogen).
Oxford nanopore sequencing
SISPA-prepared cDNA were submitted to the University of Wisconsin-Madison Biotechnology Center for sequencing on the Oxford Nanopore PromethION. Upon arrival, the PCR product concentrations were confirmed with the Qubit dsDNA high-sensitivity kit (Invitrogen). Libraries were prepared with up to 100–200 fmol of cDNA according to the Oxford Nanopore ligation-based sequencing kit SQK-LSK109 and Native Barcoding kit EXP-NBD196. The quality of the finished libraries was assessed using an Agilent Tapestation (Agilent) and quantified again using the Qubit® dsDNA HS Assay Kit (Invitrogen). Samples were pooled and sequenced with an FLO-PRO002 (R9.4.1) flow cell on the Oxford Nanopore PromethION 24 for 72 h. Data were basecalled using Oxford Nanopore’s Guppy software package (6.4.6) with the high accuracy basecalling model (read filtering parameters: minimum length 200 bp, minimum Qscore = 9). Air sample AE0000100A8B3C was also sequenced on the Oxford Nanopore GridION to obtain a greater depth of coverage across all seven influenza C virus gene segments. A sequencing library was prepared for AE0000100A8B3C according to the Oxford Nanopore ligation-based sequencing kit SQK-LSK110 instructions. The sample was sequenced with an FLO-MIN106 (R9.4) flow cell on the Oxford Nanopore GridION for 72 h. Data were basecalled using Oxford Nanopore’s Guppy software package (6.4.6) with the high accuracy basecalling model (read filtering parameters: minimum length 20 bp, minimum Qscore = 9).
Sequencing analysis
Sequencing data generated from air samples were deposited in the Sequence Read Archive (SRA) under bioproject PRJNA950127. The removal of host reads was requested at the time of SRA submission using the Human Read Removal Tool (HRRT). The sequencing data were analyzed using a custom workflow. To ensure reproducibility and portability, we implemented the workflow in NextFlow and containerized all software dependencies with Docker. All workflow code and replication instructions are publicly available at ( https://github.com/dholab/air-metagenomics ). Briefly, the workflow starts by automatically pulling the study fastq files from SRA, though it has the option of merging locally stored demultiplexed fastq files as well. Then, reads are filtered to a minimum length (200 bp) and quality score (Qscore = 9), and adapters and barcodes are trimmed from the ends of the reads, all with the reformat.sh script in bbmap (39.01-0). The filtered fastq files for each air sample are then mapped to contaminant FASTA files containing common contaminants with minimap2 (v2.22). Reads that do not map to the contaminant FASTA files are retained and mapped to their sequencing run’s negative control reads to further remove contaminants present from library preparation. The cleaned fastq files for each air sample are then mapped to a RefSeq file containing human viruses downloaded from NCBI Virus using minimap2 (v2.22). The human virus reference file contains 835 viral genome sequences and was processed using the bbmask.sh command in bbmap (39.01-0) with default parameters to prevent false-positive mapping to repetitive regions in viral genomes. SAM files for each sample are converted to BAM format, again with reformat.sh. The workflow then completes by generating a pivot table of pathogen “hits,” which lists the number of reads supporting each mapped pathogen for each sample. 
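The workflow's final step, a pivot table of pathogen "hits" per sample, can be sketched in plain Python. The sample IDs and reference names below are illustrative, and the actual workflow derives its counts from the BAM alignments rather than from pre-tabulated pairs.

```python
from collections import Counter

def hit_pivot(mappings):
    """Build a sample-by-virus pivot of mapped-read counts.
    `mappings` is an iterable of (sample_id, reference_name) pairs,
    one per cleaned read that mapped to a human viral reference."""
    counts = Counter(mappings)
    samples = sorted({s for s, _ in counts})
    viruses = sorted({v for _, v in counts})
    # Every sample gets a count for every virus seen anywhere, so the
    # table is rectangular even when a virus is absent from a sample.
    return {s: {v: counts.get((s, v), 0) for v in viruses} for s in samples}

# Hypothetical mapped reads from two air cartridges.
table = hit_pivot([
    ("AE01", "influenza_C"), ("AE01", "influenza_C"),
    ("AE01", "rhinovirus"), ("AE02", "SARS-CoV-2"),
])
```

A real implementation would read each sample's BAM file (e.g. with pysam) and emit one pair per primary alignment before tabulating.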
For this study, we then imported the BAM alignments into Geneious Prime (2023.0.4) to inspect the mapping results visually. Genome coverage plots were created for several respiratory and enteric viruses detected in air samples using ggplot2 (3.4.1) with a custom R script (4.2.3) in RStudio (2023.03.0+386).
Phylogenetic analysis
To place the influenza C virus detected in preschool air sample AE0000100A8B3C in phylogenetic context, we downloaded 45 influenza C virus genome sequences for each of the seven gene segments from GenBank (HE, PB2, PB1, P3, NP, M, and NS). Accession numbers for each segment can be found in supplementary data 1. Consensus sequences were generated from AE0000100A8B3C with a minimum coverage of 20X. Sections with low coverage were masked with N and trimmed to the reference sequence length. Next, each set of influenza C virus gene segment sequences was aligned using MUSCLE (5.1) implemented in Geneious Prime (2023.0.4) with the PPP algorithm. We then used the Geneious Tree Builder (2023.0.4) to construct a phylogeny for each gene segment using the neighbor-joining method and the Tamura-Nei model with 100 bootstrapped replicates.
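The coverage-masking rule used for the consensus sequences (positions below 20X become N) can be sketched as follows, assuming per-position base calls and depths have already been extracted; the study performed this step in Geneious Prime rather than with custom code.

```python
def mask_consensus(bases, depths, min_cov=20):
    """Mask consensus positions with coverage below `min_cov` as 'N',
    mirroring the 20X threshold applied to the ICV gene segments."""
    return "".join(b if d >= min_cov else "N" for b, d in zip(bases, depths))

# Toy 6-bp consensus with its per-position read depths (illustrative).
masked = mask_consensus("ACGTAC", [25, 30, 5, 22, 0, 21])
```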
SARS-CoV-2 RT-PCR
Air samples collected between December 2021 and May 2022 were tested for SARS-CoV-2 viral RNA using three different SARS-CoV-2 RT-PCR assays depending on their collection location as previously described 19 . Air samples collected after May 2022 were tested for SARS-CoV-2 viral RNA using an RT-PCR protocol as previously described 42 . Briefly, viral RNA was isolated from the air sample substrate using 300 μL of eluate and the Viral Total Nucleic Acid kit for the Maxwell 48 instrument (Promega), following the manufacturer’s instructions. RNA was eluted in 50 μL of nuclease-free water. Reverse transcription qPCR was performed using primer and probes from an assay developed by the Centers for Disease Control and Prevention to detect SARS-CoV-2 (N1 and N2 targets). The 20 μL reaction mix contained 5 μL of 4 × TaqMan Fast Virus 1-Step Master Mix, 1.5 μL of N1 or N2 primer/probe mix (IDT), 5 μL of sample RNA, and 8.5 μL of nuclease-free water. The RT-PCR amplification was run on a LightCycler 96 at the following conditions: 37 °C for 2 min, 50 °C for 15 min, 95 °C for 2 min, 50 cycles of 95 °C for 3 s and 55 °C for 30 s, and final cool down at 37 °C for 30 s. The data were analyzed in the LightCycler 96 software 1.1 using absolute quantification analysis. Air samples were called positive when N1 and N2 targets both had cycle threshold (Ct) values < 40, inconclusive when only one target had Ct < 40, and negative if both targets had Ct > 40. | Results
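The positive/inconclusive/negative call logic described above reduces to a small function. This is a sketch of the stated cutoffs only; the representation of a failed amplification as `None` is an illustrative assumption.

```python
def call_sars_cov_2(n1_ct, n2_ct, cutoff=40.0):
    """Classify an air sample from CDC N1/N2 Ct values using the
    cutoffs described above: both targets below the cutoff is positive,
    one is inconclusive, neither is negative. `None` means the target
    never amplified."""
    detected = sum(ct is not None and ct < cutoff for ct in (n1_ct, n2_ct))
    return {2: "positive", 1: "inconclusive", 0: "negative"}[detected]
```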
Study design
From July 2021 to December 2022, we deployed active air samplers in several community settings in the Upper Midwestern states of Wisconsin and Minnesota for routine pathogen monitoring. Thermo Fisher AerosolSense Samplers were used to collect air samples from high-traffic areas in several different congregate settings, including a preschool, campus athletic facility, emergency housing facility, brewery taproom, household, and five K-12 schools. Air samples were collected at weekly and twice-weekly intervals as previously described 19 . To demonstrate the feasibility of pathogen-agnostic sequencing to detect human viruses captured in air samples in real-world settings, we analyzed a total of 22 air samples across the 10 congregate settings (Table 1 ). We also processed three air sample filter substrates from unused AerosolSense cartridges, as no-template controls. Viral RNA was extracted from air samples, and complementary DNA (cDNA) was prepared using sequence-independent single primer amplification (SISPA) for Oxford Nanopore deep sequencing and metagenomic analysis. Sequencing reads were filtered for host and reagent contaminants and mapped to 835 human-associated viral reference sequences from NCBI to look for common circulating RNA and DNA viruses (available on GitHub at https://github.com/dholab/air-metagenomics/blob/main/resources/ncbi_human_virus_refseq_20221011.masked.fasta ).
Detection of human respiratory and enteric viruses
Deep sequencing allowed us to detect human viruses in 19 out of 22 (86%) air samples. No human viruses were detected in any of the no-template controls. We define a detection of a virus “hit” as two or more reads mapping to the viral reference sequence in two or more non-overlapping genomic regions. By this definition, we detected a total of 13 human RNA viruses in air samples (Table 1 ). Several of these viruses are associated with frequent and seasonal respiratory illnesses that cause a burden on the healthcare system, including influenza virus type A and C, respiratory syncytial virus subtypes A and B, human coronaviruses (NL63, HKU1, and 229E), rhinovirus, and SARS-CoV-2 (Fig. 1 ). We also detected human viruses associated with enteric disease, including rotavirus, human astrovirus, and mamastrovirus, in ten of the 22 (45%) air samples in this study.
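The two-reads-in-two-non-overlapping-regions detection rule can be expressed as a short function over mapped read intervals. This is a sketch with illustrative coordinates; the actual analysis worked from BAM alignments.

```python
def is_hit(read_spans, min_reads=2, min_regions=2):
    """Apply the detection rule used above: a virus counts as a 'hit'
    when at least `min_reads` reads map to its reference across at
    least `min_regions` non-overlapping genomic regions.
    `read_spans` is a list of (start, end) mapped intervals."""
    if len(read_spans) < min_reads:
        return False
    regions = 0
    prev_end = -1
    for start, end in sorted(read_spans):
        if start > prev_end:      # this read opens a new, non-overlapping region
            regions += 1
        prev_end = max(prev_end, end)
    return regions >= min_regions

# Three hypothetical reads covering two separate parts of a genome.
flu_hit = is_hit([(110, 450), (900, 1300), (950, 1350)])
```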
Characterizing influenza C virus lineage in a preschool air sample
A preschool air sample from the week of February 1st showed comprehensive coverage of influenza C virus (ICV), with reads mapping to all seven gene segments (supplementary data 1). This included hemagglutinin-esterase (HE), each of the genes encoding proteins for the polymerase complex (PB2, PB1, and P3), nucleoprotein (NP), matrix (M), and nonstructural protein (NS) (supplementary data 1). ICV is an understudied respiratory virus, with a total of 2475 ICV sequences available in NCBI Genbank (taxid:11552) and only 134 ICV sequences submitted from the United States in the 21st century. To contribute more data on this understudied virus, we used the remaining SISPA-prepared cDNA from the February 1st preschool sample to perform an additional sequencing run with the Oxford Nanopore GridION. This increased the sample’s depth of coverage across the ICV genome compared to the initial Oxford Nanopore PromethION run, where the flow cell was shared with many samples. We then used the GridION reads to create consensus sequences for each gene segment, which enabled us to perform a Tamura-Nei neighbor-joining phylogenetic analysis comparing the February 1st preschool ICV with 45 other ICV genomes from GenBank (supplementary data 1). Our phylogeny supported six genetic lineages for the HE gene and two for all other gene segments (Fig. 2 ; supplementary figure 1). HE grouped with the C/Kanagawa/1/76 lineage. PB2, PB1, M, and NS grouped with the C/Yamagata/81 lineage. P3 and NP grouped with the C/Mississippi/80 lineage. This particular reassortant clusters closely with the influenza C virus C/Scotland/7382/2007, which was previously identified by Smith et al. (Fig. 2 ; supplementary figure 1) 20 .
Longitudinal detection of human viruses in a preschool
Metagenomic analysis of air samples longitudinally collected from congregate settings can provide insight into changes in the prevalence of pathogens over time. These data could provide public health authorities valuable information to improve routine pathogen surveillance programs and outbreak investigations. To track the prevalence of viral genetic material from ICV and other human viruses in a preschool, we analyzed four air samples that were longitudinally collected from January 5, 2022, to March 1, 2022. ICV was first detected in an air sample collected from January 26th to February 1st, 2022. Viral reads in this sample mapped to three out of the seven gene segments including HE, PB2, and NS. Two air samples collected after February 1st, 2022, also contained reads that mapped to several ICV gene segments. ICV genetic material was detected at the highest abundance in air sample AE0000100A8B3C collected from February 1st to the 8th, with reads mapping to all seven gene segments as described above (Table 1 ; supplementary data 1). Viral reads mapping to five of the seven gene segments, including PB2, PB1, P3, HE, and NP, were detected in an air sample collected from February 23rd to March 1st.
Detection of SARS-CoV-2 in RT-PCR-positive air samples
To explore whether metagenomic sequencing can detect a human virus that is known to be present in an air sample, we sequenced air samples with known SARS-CoV-2 status. Each AerosolSense cartridge comes with two filter substrates. One filter substrate from each air sample was tested by reverse transcription PCR (RT-PCR) to determine its SARS-CoV-2 status. The other substrate was eluted in 500 μL of PBS and stored at − 80 °C until it was processed for sequencing. Several different RT-PCR assays were used on samples included in this study, depending on when and where they were collected, as previously described 19 . Cut-off values used for determining if an air sample was positive, inconclusive, or negative for SARS-CoV-2 are described in the “ Methods ” section.
SISPA sequencing detected SARS-CoV-2 reads in two out of 15 (13.3%) air cartridges that were positive for SARS-CoV-2 by the more sensitive RT-PCR assays (Table 1 ; supplementary data 1). The percent of genome coverage varied between the two samples (6.5% and 46.7%). No SARS-CoV-2 reads were observed in any of the samples that were negative or inconclusive for SARS-CoV-2 by RT-PCR testing, or in the no-template controls (supplementary data 1). An inconclusive result was defined as a sample with amplification of only one of the two PCR targets. These data suggest that, unsurprisingly, SISPA sequencing is not as sensitive as RT-PCR for detecting viral genomic material captured in air samples.
In this study, we used air samples and metagenomic sequencing to detect human RNA viruses in a variety of congregate settings. Specifically, our results show that active air sampling, SISPA library preparation, long-read Oxford Nanopore sequencing, and metagenomic bioinformatics can be used to detect both common and lesser-known human viruses. Because air samplers collect bioaerosols produced when infected individuals breathe, sneeze, cough, or talk, the majority of detected viruses were respiratory. However, we also detected enteric viruses that are transmitted through the fecal–oral route 21 , 22 . Several studies have previously used air sampling to detect enteric viruses in congregate settings, including a daycare, a wastewater treatment facility, and a hospital 11 , 14 , 23 . The detection of respiratory viruses in wastewater, and enteric viruses in air samples, creates future opportunities for integration of clinical, wastewater, and air sampler data from the same geographical location to obtain a more comprehensive understanding of viral spread within communities.
The most unexpected virus we detected was influenza C virus (ICV) in a preschool. SISPA and Oxford Nanopore sequencing allowed us to classify the viral lineages of all seven gene segments of ICV. ICV is a lesser-studied influenza virus that is often excluded from routine respiratory pathogen surveillance programs, which highlights one important limitation of virus-specific surveillance. Previous studies have shown that ICV seroprevalence in children increases with age, suggesting that ICV is a common yet under-ascertained cause of respiratory illness 24 – 26 , with an epidemiology that remains poorly understood 26 , 27 . Our ICV results highlight the potential of using air sample networks to detect viruses that have previously received limited attention. ICV was historically very difficult to detect with cell culture techniques because it causes weak cytopathic effects 24 , which may lead to an underestimation of ICV prevalence. It will be interesting to see how much more often viruses like ICV are detected as virus-agnostic environmental surveillance becomes more prevalent.
A potential strength of regular air sample collection, processing, and analysis is the ability to characterize outbreaks longitudinally. For example, we detected the same ICV in the same preschool on four occasions, with viral read counts rising and falling through time. One possible explanation for this pattern is that it reflects airborne viral RNA load rising and falling over the course of the source infection(s), though we do not have the data to assess host viral load itself. We detected a similar pattern in longitudinal preschool air sampling for astrovirus, which showed a rise and subsequent tapering of sequencing reads between samples. These results suggest that air sampling can be used to characterize outbreaks in real time. In extended outbreaks, it could also be used to provide early sequencing results, which could then be used to design primers for more sensitive amplification of viruses in the air.
That said, we caution that our study does not include baseline clinical knowledge of all viruses that were present in the settings we sampled. More study will be needed to understand the relationship between air-sampler-derived read counts and airborne viral loads, to determine when a viral load is too low to be detected by our methods, and to rigorously assess the sensitivity and specificity of this approach to pathogen monitoring. However, our SARS-CoV-2 detection results suggest that improving sensitivity may be the best place to start optimizations: while we never detected SARS-CoV-2 in the absence of known infections (no false positives), we failed to detect it in 85% of the cases where an air sample tested positive by a more sensitive qPCR assay.
Improvements in bioaerosol collection, nucleic acid library preparation, sequencing technology, and pathogen-agnostic bioinformatics could all significantly improve the detection of human pathogens with air samplers. While SISPA and Oxford Nanopore sequencing enabled us to detect portions of a variety of viral genomes, we inevitably missed additional viruses that were present in the air. Air samples contain high amounts of human, animal, and microbial ribosomal RNA (rRNA), likely associated with airborne microbes and host cells transported on dust particles 28 . Several studies have shown that rRNA depletion can improve the sensitivity of unbiased sequencing techniques for recovering human RNA viruses from different sample types 29 , and should be considered for use with air samples. Alternatively, probe-based target capture methods, where nucleic acids eluted from the air cartridge are only retained if they are a reverse complement of a probe sequence 30 , could be used to enrich viral target sequences. Of note, enriching for a predetermined panel of viruses would make this a multi-pathogen method, not a pathogen-agnostic method 29 . Even so, the line between multi-pathogen and pathogen-agnostic is increasingly blurred; commercial kits are available that contain probes for more than 3000 different viruses, including those with ssRNA, dsRNA, dsDNA, and ssDNA genomes. Truly pathogen-agnostic methods, such as SMART-9N, are also becoming available 31 . These approaches have been used with several different sample types to detect common and uncommon viruses in human and animal specimens (nasal swabs and plasma), mosquitoes, and wastewater 32 – 35 . Ribosomal RNA depletion and target enrichment both show great promise for enabling air sample networks to screen for pathogens with low abundance in the air.
Rapidly evolving sequencing technology could also play a role in improving the efficacy of air sampler networks. Sequencing workflows need to be optimized to be high-throughput, cost-effective, and have rapid result turnaround for widespread use in air surveillance programs. In this study we ran two Oxford Nanopore sequencing runs on the PromethION 24. The runs multiplexed 16 and 9 samples and had outputs of 85 and 67 Gigabases per flow cell, respectively. The manufacturer estimates a maximal output of 290 Gigabases per flow cell when using newer sequencing kit chemistries 36 . Improving the sequencing yield to 200 Gigabases could allow for multiplexing up to 32 air samples while maintaining an average of over 6 Gigabases per air sample. This could make sequencing more cost-effective while maintaining a per-sample output similar to that obtained in this study. Additionally, Oxford Nanopore sequencing enables real-time processing of sequencing data. This could help decrease the turnaround time, as a stream of data becomes available as soon as sequencing begins rather than only after 72 h of sequencing has completed. This rapid result turnaround could be beneficial during outbreak response, when real-time data is essential.
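The multiplexing arithmetic above can be checked directly. A minimal sketch, assuming the flow-cell output is split evenly across barcoded samples (the yields and sample counts come from the text; the equal-split assumption is ours):

```python
def per_sample_gb(flow_cell_gb: float, n_samples: int) -> float:
    """Average sequencing yield per multiplexed sample, in Gigabases (Gb),
    under an even split of the flow cell's output."""
    return flow_cell_gb / n_samples

# Run 1 in this study: 85 Gb split across 16 samples -> ~5.31 Gb per sample.
run1 = per_sample_gb(85, 16)
# Projected: 200 Gb split across 32 samples -> 6.25 Gb per sample.
projected = per_sample_gb(200, 32)
```

In practice barcode balancing is imperfect, so per-sample yields would vary around these averages.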
The COVID-19 pandemic sparked widespread interest in environmental surveillance strategies for improving pandemic preparedness and outbreak response. While wastewater surveillance has many benefits, air surveillance via active air samplers is more mobile, which makes it easy to quickly deploy air sampling networks in settings of interest such as health clinics, airplanes, ports of entry, public transit, farms, K-12 schools, long-term care facilities, emergency housing facilities, or any other setting where people from many places congregate 37 – 39 . Highlighting this potential application, Mellon et al. recently deployed AerosolSense samplers in an outpatient clinic for patients suspected of mpox infection to look for mpox virus in the air during the 2022 mpox public health emergency of international concern 40 . Given the portability and flexibility of deployment, air sampling and detection of viral nucleic acids from these samples could become a cornerstone of agile public health responses to viral outbreaks in the near future.
Recent advances in metagenomic sequencing technologies have increased efforts to study microbial communities in built environments. This study demonstrates that metagenomic sequencing approaches paired with air sampling can be used to detect human respiratory and enteric viruses of public health importance in real-world settings. With continual technological improvements and laboratory optimizations, the general framework put forth here could provide a rapid means of monitoring viruses without relying on test-seeking behavior or pathogen-specific assays.

Abstract

Innovative methods for evaluating virus risk and spread, independent of test-seeking behavior, are needed to improve routine public health surveillance, outbreak response, and pandemic preparedness. Throughout the COVID-19 pandemic, environmental surveillance strategies, including wastewater and air sampling, have been used alongside widespread individual-based SARS-CoV-2 testing programs to provide population-level data. These environmental surveillance strategies have predominantly relied on pathogen-specific detection methods to monitor viruses through space and time. However, this provides a limited picture of the virome present in an environmental sample, leaving us blind to most circulating viruses. In this study, we explore whether pathogen-agnostic deep sequencing can expand the utility of air sampling to detect many human viruses. We show that sequence-independent single-primer amplification sequencing of nucleic acids from air samples can detect common and unexpected human respiratory and enteric viruses, including influenza virus types A and C, respiratory syncytial virus, human coronaviruses, rhinovirus, SARS-CoV-2, rotavirus, mamastrovirus, and astrovirus.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-48352-6.
Acknowledgements
This work was made possible by financial support through the National Institutes of Health grant (AAL4371). M.D.R. is supported by the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under Award Number T32AI55397. The author(s) thank the University of Wisconsin Biotechnology Center DNA Sequencing Facility (Research Resource Identifier—RRID:SCR_017759) for providing PromethION sequencing services. We would like to acknowledge Eli O’Connor’s work in developing the iOS and Android Askidd mobile app to help streamline air sample metadata collection. We would like to thank all of the participating congregate settings for their partnership during this study.
Author contributions
N.R.M. contributed to the formal analysis, investigation, methodology, writing—original draft preparation and writing as well as revision—review and editing. M.D.R. contributed to the conceptualization, data curation, formal analysis, investigation, methodology, project administration, visualization, writing—original draft preparation, writing—review and editing. D.H.O. and S.L.O. contributed to the conceptualization, project administration, writing—original draft preparation, and writing—review and editing. M.R.S., O.E.H., A.A., W.C.V., M.J.B., and J.R.R. contributed to data curation, logistics, organization, and writing—review and editing. L.J.B. and M.T.A. contributed to data curation, resources, project management, and writing—review and editing. S.F.B., S.W., M.L., and M.M. contributed to logistics, organization, and writing—review and editing.
Data availability
The air sample sequencing data generated in this study have been deposited in the Sequence Read Archive (SRA) under bioproject PRJNA950127. The accession numbers for influenza C virus samples used in the phylogenetic analysis are provided in Supplementary Data 1.
Code availability
Code to replicate air sample sequencing analysis is available at https://github.com/dholab/air-metagenomics .
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Sci Rep. 2023 Dec 4; 13:21398.
PMC10703939 (PMID: 38062134)

Introduction
Chronic kidney disease (CKD) is an age-related, dangerous, and progressive pathological condition characterized by a decline in kidney function 1 – 3 . It occurs when the kidneys are damaged and unable to effectively filter waste products from the blood. Over time, the condition may progress to end-stage renal disease (ESRD), where the kidneys lose their ability to perform their essential functions, and patients require kidney dialysis or a kidney transplant to survive 4 . Based on findings from a systematic review, approximately 11–13% of the world population is affected by CKD, with the majority of cases falling within stages three to five. The incidence of CKD increases with age; this is supported by empirical evidence indicating that around 35% of individuals who are 70 years old or above are affected by CKD 5 . CKD is associated with a higher susceptibility to cardiovascular disorders (CVD), such as strokes and heart attacks 6 . In the last 20 years, the prevalence of CKD has risen significantly worldwide 7 , with the majority of cases seen in stages 3 to 5 of CKD 5 . Patients diagnosed with CKD are highly susceptible to the development of cardiovascular diseases, which stand as the primary cause of mortality within this population. Accurate prediction of survival is essential for the management of CKD patients at significant risk of heart disease, as it can aid in guiding clinical decision-making and improving patient outcomes. The initial phases of CKD are often asymptomatic, which means that patients may not experience any noticeable symptoms until the disease has progressed to a more advanced stage 3 . As a result, early detection and management of CKD are crucial for preventing progression to ESRD and reducing the risk of associated complications such as cardiovascular disease, anemia, and bone disease 8 .
The diagnostic process for CKD typically involves blood and urine tests to assess kidney function and identify any abnormalities. Treatment may include medications to regulate blood pressure and blood sugar levels, dietary changes, and lifestyle modifications such as quitting smoking and increasing physical activity 9 . The causes of CKD can vary, but some common risk factors include hypertension, diabetes mellitus, high cholesterol levels, smoking, obesity, and a family history of kidney disease 10 – 13 . Survival forecasting in patients with CKD has traditionally relied on clinical factors such as age, sex, coexisting medical conditions, and laboratory values. However, these factors may not accurately predict survival in all CKD patients, especially those with complex medical histories and multiple comorbidities. With the advent of machine learning algorithms and big data analytics, there is an opportunity to develop more accurate and personalized survival forecasting models for CKD patients 14 , 15 . In this study, we conducted an analysis of a dataset of 467 patients released by Al-Shamsi et al. 7 in 2018. In their original study, the authors used multivariate Cox proportional hazard analysis to find the independent risk factors (older age, history of smoking, history of coronary heart disease, and history of diabetes mellitus) associated with developing CKD stages 3–5. In 2021, following the previous study, Chicco et al. 16 conducted an analysis of the identical dataset. They focused on developing a machine learning approach that could effectively classify the progression of serious CKD and identify the key variables within the dataset. Through a feature ranking analysis, they determined that age, creatinine, and eGFR were the most significant clinical characteristics when the temporal component was absent, whereas hypertension, smoking, and diabetes played a crucial role when considering the year factor.
Although the two studies 7 , 16 mentioned above presented interesting results and identified distinct risk factors associated with different stages of CKD, the existing literature lacks robust nomograms specifically designed to predict the risk of incident CKD in populations at high risk of CVD 17 . This study aims to fill this gap by developing a novel nomogram specifically designed for this particular population. Such a nomogram would serve as a straightforward and dependable tool for stratifying the risk of CKD among populations at high risk of CVD. Utilizing a risk prediction tool to identify individuals at a higher risk of developing incident CKD can improve primary care for this condition. However, the primary healthcare system encounters several challenges, including a shortage of medical personnel, inadequate government funding, and excessive workloads. To address these issues, it is feasible, convenient, and widely accepted to construct a CKD risk prediction model using conventional data already available within the medical system, alongside improved chronic disease management techniques. Its purpose is to assist physicians in identifying individuals who are at risk and promptly implementing targeted prevention strategies.

Materials and methods
Dataset collection and subject information
The present investigation employed a dataset obtained from 7 , which included health records of 544 patients collected from Tawam Hospital located in Al-Ain city, Abu Dhabi, United Arab Emirates (UAE) between January 1, 2008, and December 31, 2008. Figure 1 shows the flow diagram of the study design and patient selection process.
A total of 467 patients were included according to the inclusion and exclusion criteria, of whom 234 were female and 233 were male, aged 23–89 years. Due to the retrospective nature of the study, the need for informed consent was waived by the Tawam Hospital and UAE University Research Ethics Board, which approved the study protocol under Application No. IRR536/17. The study was performed in accordance with the Declaration of Helsinki. All the patients were UAE citizens over the age of 20 and diagnosed with one or more of the following conditions: coronary heart disease (CHD), pre-hypertension, diabetes mellitus (DM) or prediabetes, vascular diseases, dyslipidemia, smoking, or being overweight or obese. The data collected include the age of the patients (younger than 50, 50–60, and older than 60 years), sex (female, male), smoking status (no, yes), obesity (no, yes), total cholesterol (TC), triglycerides (TG), estimated glomerular filtration rate (eGFR), glycosylated hemoglobin type A1C (HbA1C), systolic blood pressure (SBP), diastolic blood pressure (DBP), body mass index (BMI), and serum creatinine (Scr). The study also includes disease parameters such as CHD (no, yes), diabetes mellitus (no, yes), hypertension (HTN) (no, yes), dyslipidemia (no, yes), vascular diseases (no, yes), and angiotensin-converting enzyme (ACE) inhibitor and angiotensin II receptor blocker (ARB) use (no, yes). The first category within the parentheses in each definition above serves as the reference group. Patients were recorded as having CHD if they had evidence of a coronary event, a coronary revascularization operation, or a cardiologist-determined diagnosis. Similarly, patients were categorized as having vascular disease based on specific criteria.
These criteria included a documented history of cerebrovascular accident or transient ischemic stroke, a documented history of peripheral arterial disease, or the occurrence of revascularization for peripheral vascular disease. The exclusion criteria of this study were as follows: (i) eGFR less than 60 mL/min/1.73 m2; (ii) incomplete clinical data; (iii) loss to follow-up. All dataset attributes refer to the patients' initial visits in January 2008, except for the time-year variables and EventCKD35. The follow-up period ended in June 2017. EventCKD35 is a binary variable taking the value 0 for patients who remained in CKD stages 1 or 2 and the value 1 for patients who progressed to CKD stages 3, 4, or 5. During the follow-up period, 54 patients (11.56%) with CKD stages 3–5 were identified in the entire cohort. In the context of this study, 'time' refers to the duration of the follow-up period subsequent to patients' diagnosis and initiation of treatment, quantified in survival months. Among these 54 patients, the average duration of follow-up was 50 months, with the minimum observed follow-up period being 3 months.
Diagnostic criteria
The diagnostic criteria for CKD stages 3–5 were defined based on the eGFR and kidney damage, which can be assessed through various diagnostic tests and clinical evaluations. The Kidney Disease Improving Global Outcomes (KDIGO) classification was used to categorize patients into two groups: normal (eGFR ≥ 60 mL/min/1.73 m2) and CKD stages 3–5 (eGFR < 60 mL/min/1.73 m2) 18 . The CKD epidemiology collaboration (CKD-EPI) creatinine equation was used to determine eGFR, as per the definition given below 19 :

eGFR = 141 × min(Scr/κ, 1)^α × max(Scr/κ, 1)^(−1.209) × 0.993^Age × 1.018 [if female],

where Scr denotes serum creatinine measured in mg/dL, age is expressed in years, κ is a constant of 0.9 for males and 0.7 for females, α is a constant of −0.411 for males and −0.329 for females, 'min' represents the minimum value of Scr/κ or 1, and 'max' represents the maximum value of Scr/κ or 1 19 – 21 . A factor of 1.0 was assigned for ethnicity due to the absence of African-descent subjects in this study. The BMI ranges used for identifying individuals as overweight and obese are 25–29.9 kg/m2 and ≥ 30 kg/m2, respectively. According to 22 , HTN was defined as SBP over 140 mmHg, DBP over 90 mmHg, or taking medicine to treat high blood pressure. Diagnostic standards for dyslipidemia included elevated serum TC or serum TG levels (measured in mmol/L) or the use of lipid-lowering drugs 23 . The reference ranges for creatinine were 58–96 μmol/L for females and 53–115 μmol/L for males 7 . Patients were considered to have a positive smoking history if they reported either current or past tobacco smoking. The definition of prediabetes and DM followed the guidelines set by the American Diabetes Association (ADA) 24 .
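To make the computation above concrete, here is a minimal Python sketch of the CKD-EPI creatinine equation and the KDIGO-based binary grouping; the function and variable names are ours, not taken from the study's code.

```python
def ckd_epi_egfr(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """2009 CKD-EPI creatinine equation with the ethnicity factor fixed at 1.0,
    matching this cohort. Returns eGFR in mL/min/1.73 m2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age_years)
    return egfr * 1.018 if female else egfr

def kdigo_group(egfr: float) -> int:
    """EventCKD35-style label: 0 = eGFR >= 60 (stages 1-2), 1 = eGFR < 60 (stages 3-5)."""
    return 0 if egfr >= 60.0 else 1
```

For example, a 50-year-old male with Scr = 0.9 mg/dL lands near eGFR ≈ 99 and is labeled 0, while a 70-year-old female with Scr = 2.0 mg/dL lands near eGFR ≈ 25 and is labeled 1.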
Model estimation and selection
To analyze the data, first, the non-parametric Kaplan–Meier (KM) estimator was used to measure the amount of time spent in follow-up and visualize the survival curves. Then, a semi-parametric Cox proportional hazard regression model was employed to describe the impact of the variables on the survival outcome. These methods are briefly detailed here.
Kaplan–Meier method
The KM method is a non-parametric modeling approach established by Kaplan and Meier in 1958 that estimates the survival probability from observed survival times 25 . The general formula for the survival probability at time t is:

S(t) = ∏_{t_i ≤ t} (1 − d_i / n_i),

where t_1 < t_2 < ... are the ordered unique event times, n_i is the total number of patients that were 'at risk' just prior to time t_i, and d_i represents the count of events that occurred at time t_i. The estimated survival probability is a step function that begins with a horizontal line at a survival probability of 1 (at t = 0) and then steps down toward zero as events accumulate. The KM estimates were used to perform an analysis of the survival probability. The survival time, measured in months, was the primary dependent variable. Follow-up time can be interpreted as a time to event (TTE), where the event is progression from CKD stages 1–2 to CKD stages 3–5. The non-parametric KM method has a significant drawback: it cannot represent survival probability with a smooth function, rendering it unable to make predictions. Parametric models such as the exponential and Weibull distribution models can overcome this limitation 26 . They serve as a logical progression from the KM method, bridging this gap and greatly improving the understanding of survival analysis. Besides, in cases where parametric models are appropriate, they are more precise, more efficient, and more informative than KM. The KM estimation curve was fitted with exponential and Weibull distributions, which were compared using statistical measures such as the AIC (Akaike Information Criterion) and the maximized log-likelihood. A model with a smaller AIC value is a better fit, as is a model with a higher (maximum) log-likelihood. The initial analysis showed that the Weibull distribution has a larger log-likelihood of − 259.78 and a smaller AIC of 523.56 compared to the exponential model estimates (log-likelihood: − 265.49, AIC: 532.98). The Weibull distribution is therefore the superior fit, since it simultaneously maximizes the log-likelihood and minimizes the AIC, and it was used for fitting the model and making predictions.
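A minimal pure-NumPy sketch of the estimator defined above, together with the AIC arithmetic (AIC = 2k − 2 ln L) behind the model comparison; the toy survival data are illustrative, not the study cohort:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimator S(t) = prod over t_i <= t of (1 - d_i / n_i).
    times: follow-up in months; events: 1 = progression to CKD 3-5, 0 = censored."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    s, curve = 1.0, []
    for t in np.sort(np.unique(times[events == 1])):      # ordered unique event times
        n_i = int(np.sum(times >= t))                     # at risk just prior to t
        d_i = int(np.sum((times == t) & (events == 1)))   # events observed at t
        s *= 1.0 - d_i / n_i
        curve.append((t, s))
    return curve

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L (smaller is better)."""
    return 2 * n_params - 2 * log_likelihood

# Reproducing the reported comparison: Weibull has k = 2 parameters,
# the exponential has k = 1.
aic_weibull = aic(-259.78, 2)       # 523.56, as reported in the text
aic_exponential = aic(-265.49, 1)   # 532.98, as reported in the text
```

The study itself used the `lifelines` package for these fits; the sketch only mirrors the formulas.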
Figure 2 shows the KM plots for the survival function of CKD patients in stages 3–5 and the visual distribution of both fitted models. The Python programming language (version 3.10.12) and the "lifelines" package were used to estimate the KM curve 27 . The plot displays the time period (follow-up months) on the x -axis and survival probabilities on the y -axis. A notable disparity was observed between the fitted models. The exponential distribution survival plot, depicted by the green curve (Fig. 2 ), exhibits a slight deviation from the KM survival plot represented by the blue curve, whereas the orange Weibull plot aligns with it. The smooth rate of decrease in the parametric approach characterizes the survival probability more effectively than the step-wise KM method, which drops abruptly only at event times while remaining constant between events. In order to determine which model provides the best fit, a quantile–quantile (Q–Q) plot (as shown in Fig. 3 ) is used to check the clustering of observations along the reference line 28 .
The Q–Q plot determines which distribution provides a better fit to the KM estimation survival curve. The distribution whose Q–Q plot aligns more closely with a straight line provides a better fit to the data; if the points deviate significantly from a straight line, the data do not fit the chosen distribution well. From Fig. 3 , it can be observed that the Weibull distribution is a good fit for the model, as most of the data points (observed data) cluster along the reference line. Hence, we can use the Weibull distribution model to predict other features affecting CKD patients in stages 3–5; this will help us determine which features are most strongly associated with patients' survival.
Cox proportional hazard model
The Cox proportional hazard model is a semi-parametric method that can be used to analyze survival-time outcomes, also known as time-to-event outcomes, based on one or more predictors 29 . The model demonstrates features of a general regression analysis, which enables the evaluation of different levels of a factor's influence on survival time while accounting for other factors. Its functionality is highly similar to that of the logistic regression model, but instead of predicting a binary outcome, it focuses on time-to-event data. The computation of the regression coefficients enables the determination of the relative risk linked to each corresponding factor. The logistic regression model is designed to handle only a qualitative dependent variable, such as the outcome of a case (the end event), without incorporating the duration of survival time; the Cox hazard-based model utilizes both survival time and event occurrence as its dependent variables. The Cox proportional hazards model is presented in the following form 30 :

h(t | X) = h_0(t) · exp(β_1 X_1 + β_2 X_2 + ... + β_p X_p),

where t represents the time, X = (X_1, ..., X_p) indicates the p contributing factors, and h_0(t) is the baseline hazard. The relative risk function, exp(β_1 X_1 + ... + β_p X_p), is solely dependent on the p explanatory variables and the regression parameters β = (β_1, ..., β_p). The exponential values exp(β_i) are called hazard ratios (HR). A positive value of β_i, or an HR greater than one, indicates that an increase in the covariate leads to an increase in the event hazard, resulting in a decrease in the survival length. In other words, a covariate with an HR over 1 is one that is positively correlated with the likelihood of the event and hence negatively correlated with the duration of survival.
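The role of the coefficients can be illustrated with a few lines of NumPy; the β values below are hypothetical, chosen only to show how hazard ratios are read off the model, and are not the fitted values from this study:

```python
import numpy as np

# Hypothetical coefficients for three covariates -- illustration only:
# [age (per decade), smoking (0/1), diabetes (0/1)].
beta = np.array([0.35, 0.48, 0.60])
hazard_ratios = np.exp(beta)  # HR_i = exp(beta_i); HR > 1 raises the hazard

def relative_hazard(x):
    """exp(beta . x): the multiplicative factor applied to the baseline
    hazard h0(t) for a patient with covariate vector x."""
    return float(np.exp(beta @ np.asarray(x, dtype=float)))

baseline = relative_hazard([0, 0, 0])  # covariate baseline: factor exactly 1
riskier = relative_hazard([1, 1, 1])   # one decade older, smoker, diabetic
```

Because all three β are positive, each HR exceeds 1, so each covariate increases the hazard and shortens expected survival, matching the interpretation given above.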
Results and discussion

In this study, a total of 467 participants with eGFR greater than or equal to 60 mL/min/1.73 m2 were followed at 3-month intervals from the baseline visit to June 30, 2017. After the follow-up period, a total of 54 new cases (male: 34; female: 20) of CKD stages 3–5 were identified. There are 233 males and 234 females in this study, with ages ranging between 23 and 89 years (Table 1 ).
The oldest male was 89 years old, and the oldest female was 79 years old. Among the 233 males, 199 were in CKD stages 1–2 and 34 were in CKD stages 3–5. Similarly, among the 234 females, 214 were in CKD stages 1–2 and 20 were in CKD stages 3–5. The dataset contains a total of 23 features (numerical and categorical) that report demographic, biochemical, and clinical information about the CKD patients. The categorical features include the gender of the patient. Additionally, personal history factors are considered, such as diabetes history, CHD history, vascular disease history, smoking history, HTN history, dyslipidemia (DLD) history, and obesity history. Furthermore, disease-specific medications, namely DLD medications, diabetes medications, HTN medications, and inhibitors (angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers), are represented as binary values (0, 1). A descriptive statistical analysis was done using the mean ± standard deviation (SD) with an unpaired, two-tailed t -test for continuous variables and a frequency distribution (using the chi-squared test) for categorical variables to characterize the patients and their medical conditions. The statistical quantitative descriptions of the categorical and numerical features are given in Tables 2 and 3 , respectively. It can be observed from Table 2 that CKD group subjects (stages 3–5) have a higher prevalence of dyslipidemia (83.33% vs 63.68%), obesity (57.41% vs 51.33%), DLD medication use (77.78% vs 53.75%), HTN (85.19% vs 60.29%), diabetes history (87.04% vs 39.95%), CHD (31.48% vs 6.78%), vascular diseases (11.11% vs 5.08%), smoking (24.07% vs 13.56%), diabetes mellitus (75.93% vs 28.57%), and ACE inhibitor/ARB use (77.78% vs 41.89%) than non-CKD group subjects (stages 1–2). The differences in baseline characteristics between the CKD and non-CKD groups (CKD stages 1–2) are presented in Table 3 . The mean age of the non-CKD group was significantly lower than that of the CKD group.
The levels of triglycerides (TG), glycosylated hemoglobin type A1C (HbA1C), serum creatinine (SCr), and systolic blood pressure (SBP) in the CKD group were significantly higher than in the non-CKD group, whereas the estimated glomerular filtration rate (eGFR), cholesterol, diastolic blood pressure (DBP), and body mass index (BMI) were lower. The data are expressed as the median, mean, and standard deviation. A p -value less than 0.05 was considered statistically significant. It can be observed from Table 3 that the p -values of covariates such as age, cholesterol, triglycerides, HgbA1C, creatinine, eGFR, SBP, and follow-up time are less than 0.05, indicating that these variables had a significant association with CKD stages 3–5; the other covariates showed no significant influence.
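The two significance tests named above can be sketched as follows. The group sizes match the cohort (413 non-CKD vs 54 CKD), and the diabetes-history counts are back-calculated from the Table 2 percentages (47/54 ≈ 87.04%, 165/413 ≈ 39.95%); the synthetic ages are our own stand-ins for the real measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic ages (years) for the non-CKD (n = 413) and CKD (n = 54) groups.
age_non_ckd = rng.normal(55, 12, size=413)
age_ckd = rng.normal(66, 10, size=54)

# Unpaired, two-tailed t-test for a continuous variable (Welch's variant).
t_stat, p_age = stats.ttest_ind(age_non_ckd, age_ckd, equal_var=False)

# Chi-squared test for a categorical variable (diabetes history), using a
# 2x2 contingency table of [with, without] counts per group.
table = np.array([[47, 7],      # CKD group: 47 of 54 with diabetes history
                  [165, 248]])  # non-CKD group: 165 of 413
chi2, p_diab, dof, _ = stats.chi2_contingency(table)
```

With these inputs, both p-values fall well below the 0.05 threshold used in the study.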
In this study, we employed the KM survival curve fitting approach in combination with the Weibull distribution to analyze and model the survival data. The aim was to determine the "decay rate" with respect to the follow-up time period, which was used as the dependent variable for subsequent regression models. The initial step involved fitting the KM survival curve using the Weibull distribution. We produced an accurate representation of the survival data by computing the two parameters of the Weibull distribution, the shape parameter and the scale parameter. This allowed us to characterize the shape and scale of the survival curve, providing valuable insights into the underlying survival trends. After obtaining the shape and scale parameters, we determined the decay rate for the follow-up time. This result was used as the dependent variable in our regression models. We employed two regression techniques, Support Vector Machine (SVM) regression 31 and Linear Regression (LR) 32 , to investigate the relationship between the decay rate and other relevant features. To identify the most influential features, a feature ranking process was performed, which led to the selection of the top 11 predictors. Using the "SelectKBest" class in Python 3.10.12 with scikit-learn (version 1.2.2), we ranked the features and retained those with the highest scores, as determined by the chi-squared scoring function. These top 11 features were carefully chosen to enhance both the predictive accuracy of our models and the interpretability of the results. Subsequently, these selected features served as the inputs for our regression models, contributing to a more comprehensive understanding of the relationship between these features and the decay rate.
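The feature-ranking step can be sketched with scikit-learn's `SelectKBest` and the chi-squared score. The data below are synthetic stand-ins for the 467-patient, 23-feature table; chi2 requires non-negative inputs, hence the shift:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2

# Synthetic stand-in: 467 patients x 23 features, binary EventCKD35-like target.
X, y = make_classification(n_samples=467, n_features=23, n_informative=8,
                           random_state=0)
X = X - X.min(axis=0)  # chi2 scoring needs non-negative feature values

selector = SelectKBest(score_func=chi2, k=11)
X_top = selector.fit_transform(X, y)          # keep the 11 highest-scoring features
ranking = np.argsort(selector.scores_)[::-1]  # all 23 feature indices, best first
```

`selector.get_support()` then marks which of the original columns survived the cut, which is useful for mapping scores back to named clinical features.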
For our regression analysis, we adopted a data partitioning strategy, allocating 70% of the data for training the model and reserving the remaining 30% for testing and validation purposes. To assess the performance of the regression analyses, several metrics are used, namely the coefficient of determination (R-squared), mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE). The MAE is a metric that measures the average absolute difference between the original and predicted values, obtained by averaging the absolute differences over the entire dataset. It gives an indication of how close the predictions are to the actual values. The MSE is a measure of the average squared difference between the original values and the predicted values, calculated by averaging the squared differences over the dataset. RMSE is derived from MSE and provides the error rate of the prediction model in the units of the target variable; it is evaluated by taking the square root of MSE. RMSE is a popular metric since it provides a measure of the average magnitude of the prediction errors. R-squared, alternatively referred to as the coefficient of determination, indicates the goodness of fit of the model by measuring how well the predicted values align with the original values. R-squared can be interpreted as the percentage of variability in the dependent variable that is explained by the independent variables. The value of R-squared ranges between 0 and 1, with a higher R-squared value indicating a better fit and 1 representing a perfect fit. The scores obtained from both the SVM and Linear Regression models are tabulated and compared in Table 4 in order to select the best prediction model.
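The four metrics defined above can be written directly in NumPy; this is our own sketch mirroring the definitions, not the study's code:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R-squared as defined in the text."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))   # average absolute difference
    mse = float(np.mean(err ** 2))      # average squared difference
    rmse = float(np.sqrt(mse))          # error in the target's own units
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot          # share of variance explained
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2}
```

For a perfect prediction the three error metrics are 0 and R-squared is 1; the study's final model reports RMSE = 0.0695 and R-squared = 0.954 on this scale.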
Based on the comparison results provided in Table 4 , it is evident that linear models exhibit superior performance on this dataset. In order to obtain an optimal regression model, it is desirable to minimize the error, aiming for a value close to zero, while simultaneously maximizing the variability of the target variable explained by the features, striving for a value close to one. Interestingly, the results indicated that the Linear Regression model outperformed the SVM model, demonstrating better predictive accuracy for the used dataset. Therefore, we consider linear regression models having the lowest RMSE (0.069526) and the highest (0.954079) as the final prediction models. The performance of the linear regression model was assessed by comparing the actual observed values with the predicted values. Figure 4 presents the ‘Actual vs. Prediction’ plot, where each data point represents an observation in the dataset. The x -axis represents the observed values of the dependent variable, while the y -axis corresponds to the predicted values based on the regression model.
It can be observed from the plot that the majority of the data points align along a diagonal line, indicating a reasonably strong linear relationship between the predicted and actual values. This alignment indicates that the model has successfully captured the underlying trends in the data. However, it is evident that a small number of data points deviate from the diagonal line, indicating a certain level of discrepancy or inaccuracy in the predictions. These deviations could be attributed to various factors, such as measurement errors or unaccounted variables that influence the dependent variable. The ‘Actual vs. Prediction’ plot demonstrates the satisfactory performance of the linear regression model in capturing the inherent relationship between the predictors and the dependent variable. The model’s capability to predict values that fall within a reasonable range of the observed values suggests its reliability for making accurate predictions and extracting meaningful insights from the data. We have conducted a thorough evaluation of our predictive model using the five-fold cross-validation approach. This approach involves partitioning the dataset into five subsets, training the model on four subsets, and evaluating its performance on the remaining subset. This process is repeated five times, ensuring that each subset serves as the validation set exactly once. Table 5 provides a comparison of the cross-validation-based model performance metrics. By utilizing the cross-validation approach, we have ensured a robust assessment of its performance. The results from this comprehensive evaluation confirm that our predictive model is reliable and demonstrate its effectiveness.
To estimate the impact of various covariates on CKD stage 3–5, a semi-parametric Cox hazard model was fitted using the ‘lifelines’ module in Python 3.10.12; the obtained results are presented in Table 6 .
The HR and corresponding p values for each of the twenty one variable sets are listed in this table. The HR was used to evaluate the relative risk of a variable. If the HR is greater than one, it implies that the variable is positively connected with the likelihood of CKD stage 3–5 and negatively correlated with survival time. On the other hand, if the HR is less than one, it shows that the correlation is in the other direction. It has been observed from Table 6 that the p -value of the covariates such as history of CHD, DLD medications and SBP is less than 0.05, and this indicates that these variables had a significant impact on the CKD stage 3–5. The other covariates have no significant influence. The p -value for history of CHD is and the HR is 4.0603 indicating a strong relationship between the patients’ history of CHD and CKD stage 3–5. The variable ranking based on CKD stage 3–5 is illustrated in Fig. 5 .
The figure provides a forest plot reporting the HR and the confidence intervals (CI) of the HR for each covariate included in the Cox proportional hazards model. Only history of CHD, DLD medications, and SBP were found to be significant with 0.05 cutoff. It is evident from looking at the figure that history of CHD have a positive influence on survival time while DLD medications have a negative influence on the survival time. The concordance index, or C-index 33 , provides a measure of the discriminative ability of the KM estimate and the Cox Proportional Hazards model in our study. Remarkably, the KM estimate achieved a perfect C-index of 1.0, signifying its impeccable ability to distinguish between different outcomes and accurately order survival times within our dataset. In contrast, the Cox Proportional Hazards model yielded a C-index of 0.7510, indicating a substantial but not flawless discriminatory power. This comparison suggests that the KM estimate outperforms the Cox model in terms of discrimination, demonstrating an unparalleled capacity to precisely predict survival outcomes within our specific context. The KM estimate and the Cox Proportional Hazards model are both important tools in survival analysis, but they serve different purposes and have distinct advantages. Here are some advantages of the KM estimate over the Cox Proportional Hazards model: (i) KM estimates provide a non-parametric way to estimate survival curves. They make no assumptions about the underlying hazard function, which can be advantageous when the assumptions of the Cox model do not hold, (ii) KM curves are easily interpretable and can be plotted to visualize survival probabilities over time for different groups or categories. This makes them valuable for descriptive and exploratory analysis, (iii) KM analysis is relatively simple and does not involve the complexities of modeling covariates. 
It’s a suitable choice when you want to focus solely on estimating and comparing survival probabilities between groups, (iv) KM is the method of choice when the primary goal is to examine and describe the time-to-event data without modeling covariates. It is particularly useful for studying event occurrence in clinical trials and observational studies. However, it is important to note that while the KM estimate has these advantages, it is limited in its ability to model the impact of covariates on survival time and does not provide HRs. For such analyses, the Cox proportional hazards model may be more appropriate. Following the selection of the superior regression model, we extracted the coefficients and intercept values from the model. These coefficients and intercepts were crucial in constructing a nomogram. A nomogram is a graphical representation that provides a simple and intuitive tool for predicting outcomes based on the regression model. It consists of four lines: the point line, the line for the risk factor, the line for the probability, and the line for the total number of points. The process of constructing these lines has been previously explained 34 , 35 . The point line is built by assigning values ranging from 0 to 100. The linear predictor ( ) value is determined based on a coefficient derived from a fitted regression model. If the independent attributes is a categorical with n categories, and ( ) dummy variables are generated. The formula for is as follows: Using this formula, are calculated for each risk category and aligned to the respective risk factor lines. The calculation for is as follows: where represents the regression coefficient value for the n th category of the m th risk factor. indicates the value of the risk factor with the largest estimated range of attribute values. The probability line indicates the probability value associated with a given total point, which spans the range from 0 to 1. 
The total point line is derived by cumulatively summing up the values. The Logistic Regression model is represented by the expression . The total number of points corresponding to each value of the probability line can be determined by substituting this equation into the previous expression. In this equation, the value on the probability line, is substituted to construct the total point line. By utilizing the coefficients and intercept value ( ), a nomogram can be developed as shown in Fig. 6 to aid in clinical decision-making and risk assessment 34 .
To predict the risk of CKD stages 3–5 for a patient with the following values: gender = 0, age = 89, history of smoking = 1, DM medications = 1, SBP = 92, and time follow-up = 5 months, each value is assigned to its respective points as illustrated in Fig. 7 .
The resulting point values obtained are as follows: 38, 100, 20, 0, 28, and 65. These numbers are then summed to get an overall point value of 251, which may be used to assess the risk of CKD stages 3 to 5 by consulting the nomogram’s given curve. Using these data, we may estimate that this patient has a 0.58% chance of developing CKD stages 3–5. This example demonstrates the practical applications of nomograms to predict clinical outcomes. Figure 8 shows the nomogram results indicating the risk scores based on the established logistic regression model during the follow-up periods of 31–50 and 81–95 months, respectively.
Additionally, supplementary Figs. S1 , S2 , S3 , and S4 provided the corresponding results for the follow-up periods of 16–30 months, 51–65 months, 66–80 months, and 96–111 months, respectively. The nomogram assessment considered various factors such as age, gender, medical history, laboratory results, and specific risk factors associated with CKD stages 3–5. By integrating these factors, we have generated personalized risk scores for each patient. These risk scores are visually represented in Fig. 9 and the summary of results is provided in supplementary Table ST1 .
The plot depicting the patient’s ID versus risk score for CKD stages 3–5 provides a visual representation of the varying levels of risk associated with individual patients within these stages. The x -axis of the plot corresponds to the patient ID, which is a unique identifier assigned to each patient within the dataset. The patient IDs are organized in ascending order, meaning that the patients’ data points will be plotted sequentially along the x -axis. The vertical y -axis, is used to represent the risk score associated with stages 3–5 of CKD. The risk score is a quantitative measure that evaluates the probability or seriousness of complications associated with CKD. Through an analysis of the plot, one can observe the distribution of risk scores across the patients with CKD stages 3–5. Higher risk scores are typically associated with patients who have a higher probability of developing complications from their kidney disease. Conversely, lower risk scores indicate a lower probability of such events occurring. The plot allows healthcare professionals to visually identify the risk scores of patients with CKD stages 3–5. It can assist in identifying patients who may require closer monitoring, targeted interventions, or specialized care based on their individual risk profiles. Additionally, the plot can provide insights into the overall distribution of risk scores within this specific CKD population, helping to inform future clinical decision-making. The study has several flaws: (i) the small size of the datasets; (ii) since patient mortality was not taken into account in this study, the incidence of CKD may be underestimated; (iii) more information about the patient’s physical features and work history would have helped find other risk factors for cardiovascular diseases; and (iv) if a similar dataset with similar characteristics from a different part of the world had been available, it would have been helpful. | Results and discussion
In this study, a total of 467 participants with eGFR greater than or equal to 60 mL/min/1.73 m² were followed at 3-month intervals from the baseline visit to June 30, 2017. Over the follow-up period, a total of 54 new cases (male: 34; female: 20) of CKD stages 3–5 were identified. The study included 233 males and 234 females, with ages ranging between 23 and 89 years (Table 1).
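These counts imply the cumulative incidence reported for the cohort (11.56% in the abstract); a trivial check, where the helper name is illustrative:

```python
def cumulative_incidence(new_cases: int, cohort_size: int) -> float:
    """Incidence proportion over the follow-up period, as a percentage."""
    return 100.0 * new_cases / cohort_size

participants = 467   # baseline eGFR >= 60 mL/min/1.73 m^2
new_cases = 54       # progressed to CKD stages 3-5 (34 male, 20 female)
print(round(cumulative_incidence(new_cases, participants), 2))  # -> 11.56
```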
The oldest male was 89 years old, and the oldest female was 79 years old. Among the 233 males, 199 were in CKD stages 1–2 and 34 were in CKD stages 3–5; among the 234 females, 214 were in CKD stages 1–2 and 20 were in CKD stages 3–5. The dataset contains a total of 23 features (numerical and categorical) reporting demographic, biochemical, and clinical information about the CKD patients. The categorical features include the gender of the patient and personal history factors such as diabetes history, CHD history, vascular disease history, smoking history, HTN history, DLD history, and obesity history. Furthermore, disease-specific medications, namely DLD medications, diabetes medications, HTN medications, and inhibitors (angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers), are represented as binary values (0, 1). Descriptive statistics were computed as mean ± standard deviation (SD) with an unpaired, two-tailed t-test for continuous variables and as frequency distributions with the chi-squared test for categorical variables, to characterize the patients and their medical conditions. The statistical description of the categorical and numerical features is provided in Tables 2 and 3, respectively. Table 2 shows that subjects in the CKD group (stages 3–5) had a higher prevalence of dyslipidemia history (83.33% vs 63.68%), obesity (57.41% vs 51.33%), DLD medications (77.78% vs 53.75%), HTN (85.19% vs 60.29%), diabetes (87.04% vs 39.95%), CHD (31.48% vs 6.78%), vascular diseases (11.11% vs 5.08%), smoking (24.07% vs 13.56%), diabetes mellitus (75.93% vs 28.57%), and ACEI/ARB use (77.78% vs 41.89%) than subjects in the non-CKD group (stages 1–2). The differences in baseline characteristics between the CKD and non-CKD (CKD stages 1–2) groups are presented in Table 3. The mean age of the non-CKD group ( years) was significantly lower than that of the CKD group ( years).
The levels of triglycerides (TG), glycosylated hemoglobin A1C (HbA1C), serum creatinine (SCr), and systolic blood pressure (SBP) in the CKD group were significantly higher than in the non-CKD group, whereas the estimated glomerular filtration rate (eGFR), cholesterol, diastolic blood pressure (DBP), and body mass index (BMI) were lower. The data are expressed as the median, mean, and standard deviation, and a p-value less than 0.05 was considered statistically significant. Table 3 shows that age, cholesterol, triglycerides, HbA1C, creatinine, eGFR, SBP, and follow-up time each had p-values below 0.05, indicating a significant association with CKD stages 3–5; the remaining covariates showed no significant association.
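The unpaired two-tailed t-test used for these continuous variables compares group means via a t-statistic. Below is a minimal plain-Python sketch of the Welch form of the statistic on invented toy values, not the study data:

```python
import math

def welch_t(sample_a, sample_b):
    """Unpaired (Welch) t-statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Illustrative toy data: SBP values in a non-CKD vs a CKD group.
non_ckd = [118, 122, 125, 121, 119]
ckd = [135, 140, 138, 142, 137]
print(round(welch_t(non_ckd, ckd), 2))
```

A large-magnitude statistic (here strongly negative, since the first group's mean is lower) corresponds to a small p-value once referred to the t-distribution.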
In this study, we employed the KM survival curve fitting approach in combination with the Weibull distribution to analyze and model the survival data. The aim was to determine the “decay rate” with respect to the follow-up time period, which served as the dependent variable for the subsequent regression models. The initial step involved fitting the KM survival curve with the Weibull distribution. We obtained an accurate representation of the survival data by estimating the distribution’s two parameters, the shape parameter and the scale parameter, which together determine the shape and scale of the survival curve and provide valuable insight into the underlying survival trends. After obtaining these parameters, we determined the decay rate for the follow-up time; this result was used as the dependent variable in our regression models. We employed two regression techniques, Support Vector Machine (SVM) 31 and Linear Regression (LR) 32, to investigate the relationship between the decay rate and other relevant features. To identify the most influential features, we performed feature ranking with the “SelectKBest” class in Python 3.10.12 with scikit-learn (version 1.2.2), using the chi-squared scoring function, and selected the 11 highest-scoring predictors. These top 11 features were chosen to enhance both the predictive accuracy of our models and the interpretability of the results, and subsequently served as the inputs for our regression models, contributing to a more comprehensive understanding of the relationship between these features and the decay rate.
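The survival-modelling step can be sketched end-to-end in plain Python. This is an illustrative reconstruction, not the study's code: the follow-up times and event flags are invented, the helper names are hypothetical, and the Weibull parameters are recovered by linearizing the KM curve (for a Weibull survival function, log(−log S(t)) is linear in log t); the study itself used standard Python survival tooling.

```python
import math

def kaplan_meier(times, events):
    """Kaplan-Meier estimate: list of (time, S(t)) at each event time."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    surv, curve, i = 1.0, [], 0
    while i < len(pairs):
        t = pairs[i][0]
        group = [e for tt, e in pairs if tt == t]  # subjects at time t
        deaths = sum(group)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= len(group)
        i += len(group)
    return curve

def fit_weibull(curve):
    """Recover (shape rho, scale lam) by least squares on the linearized
    Weibull survival function: log(-log S) = rho*log(t) - rho*log(lam)."""
    pts = [(math.log(t), math.log(-math.log(s)))
           for t, s in curve if 0 < s < 1]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    rho = (sum((x - mx) * (y - my) for x, y in pts)
           / sum((x - mx) ** 2 for x, _ in pts))
    lam = math.exp(mx - my / rho)
    return rho, lam

# Invented follow-up data: months to CKD 3-5 (event=1) or censoring (0).
times = [3, 6, 6, 9, 12, 15, 18, 24, 30, 36]
events = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
rho, lam = fit_weibull(kaplan_meier(times, events))

def hazard(t):
    """Weibull hazard ("decay rate") at follow-up time t."""
    return (rho / lam) * (t / lam) ** (rho - 1)
```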
For our regression analysis, we adopted a data partitioning strategy, allocating 70% of the data for training the model and reserving the remaining 30% for testing and validation. To assess the performance of the regression analyses, four metrics were used: the coefficient of determination (R-squared), mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE). The MAE is a metric that measures the average absolute difference between the original and predicted values, averaged over the entire dataset; it indicates how close the predictions are to the actual values. The MSE is the average of the squared differences between the original and predicted values. The RMSE is obtained by taking the square root of the MSE; it is a popular metric because it expresses the average magnitude of the prediction errors in the units of the target variable. R-squared, alternatively referred to as the coefficient of determination, indicates the goodness of fit of the model by measuring how well the predicted values align with the original values; it can be interpreted as the proportion of variability in the dependent variable that is explained by the independent variables. R-squared ranges between 0 and 1, with a higher value indicating a better fit and 1 representing a perfect fit. The scores obtained from both the SVM and Linear Regression models are tabulated and compared in Table 4 in order to select the best prediction model.
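The four evaluation metrics described above can be computed directly from paired actual/predicted values; a self-contained sketch with illustrative numbers:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R-squared for paired actual/predicted values."""
    n = len(y_true)
    residuals = [a - p for a, p in zip(y_true, y_pred)]
    mae = sum(abs(r) for r in residuals) / n        # mean absolute error
    mse = sum(r * r for r in residuals) / n         # mean squared error
    rmse = math.sqrt(mse)                           # root of the MSE
    mean_y = sum(y_true) / n
    ss_tot = sum((a - mean_y) ** 2 for a in y_true)
    r2 = 1.0 - (mse * n) / ss_tot                   # 1 - SS_res / SS_tot
    return mae, mse, rmse, r2

# Illustrative decay-rate values (actual vs predicted), not study data.
y_true = [0.10, 0.25, 0.40, 0.55, 0.70]
y_pred = [0.12, 0.22, 0.43, 0.52, 0.73]
mae, mse, rmse, r2 = regression_metrics(y_true, y_pred)
```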
Based on the comparison results provided in Table 4, it is evident that the linear model performs better on this dataset. An optimal regression model should minimize the error, aiming for a value close to zero, while maximizing the proportion of variability in the target variable explained by the features, striving for a value close to one. The results indicated that the Linear Regression model outperformed the SVM model, demonstrating better predictive accuracy on this dataset. Therefore, the linear regression model, which achieved the lowest RMSE (0.069526) and the highest R-squared (0.954079), was selected as the final prediction model. Its performance was assessed by comparing the actual observed values with the predicted values. Figure 4 presents the ‘Actual vs. Prediction’ plot, where each data point represents an observation in the dataset: the x-axis represents the observed values of the dependent variable, while the y-axis corresponds to the values predicted by the regression model.
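As a minimal illustration of the selected model class, a simple linear regression can be fitted in closed form by least squares (toy data; the study's model was fitted on the selected clinical features, not these values):

```python
def fit_ols(xs, ys):
    """Closed-form least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Toy data with a roughly linear trend.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
slope, intercept = fit_ols(xs, ys)
preds = [intercept + slope * x for x in xs]  # points on the fitted line
```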
It can be observed from the plot that the majority of the data points align along the diagonal line, indicating a reasonably strong linear relationship between the predicted and actual values and showing that the model has successfully captured the underlying trends in the data. A small number of data points deviate from the diagonal, indicating some discrepancy in the predictions; these deviations could be attributed to factors such as measurement errors or unaccounted variables that influence the dependent variable. Overall, the ‘Actual vs. Prediction’ plot demonstrates the satisfactory performance of the linear regression model in capturing the inherent relationship between the predictors and the dependent variable, and the model’s ability to predict values within a reasonable range of the observed values suggests its reliability for making accurate predictions and extracting meaningful insights from the data. We also conducted a thorough evaluation of the predictive model using five-fold cross-validation: the dataset is partitioned into five subsets, the model is trained on four subsets and evaluated on the remaining one, and the process is repeated five times so that each subset serves as the validation set exactly once. Table 5 provides a comparison of the cross-validation-based model performance metrics; the results of this evaluation confirm that the predictive model is reliable and effective.
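The five-fold partitioning described above can be sketched as manual index splitting; the helper below is a hypothetical stand-in for standard cross-validation tooling:

```python
def five_fold_indices(n_samples, k=5):
    """Yield (train, validation) index lists; each sample validates once."""
    indices = list(range(n_samples))
    base, extra = divmod(n_samples, k)   # spread any remainder evenly
    folds, start = [], 0
    for f in range(k):
        size = base + (1 if f < extra else 0)
        folds.append(indices[start:start + size])
        start += size
    for f in range(k):
        # Train on all folds except fold f; validate on fold f.
        train = [i for g, fold in enumerate(folds) if g != f for i in fold]
        yield train, folds[f]

splits = list(five_fold_indices(23))  # 5 train/validation partitions
```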
To estimate the impact of the various covariates on CKD stages 3–5, a semi-parametric Cox proportional hazards model was fitted using the ‘lifelines’ module in Python 3.10.12; the results are presented in Table 6.
The HR and corresponding p-values for each of the twenty-one variables are listed in this table. The HR was used to evaluate the relative risk associated with a variable: an HR greater than one implies that the variable is positively associated with the likelihood of CKD stages 3–5 and negatively associated with survival time, whereas an HR less than one indicates the opposite. Table 6 shows that the covariates history of CHD, DLD medications, and SBP had p-values below 0.05, indicating that these variables had a significant impact on CKD stages 3–5; the other covariates had no significant influence. For history of CHD, the p-value is below 0.05 and the HR is 4.0603, indicating a strong relationship between a patient’s history of CHD and CKD stages 3–5. The variable ranking for CKD stages 3–5 is illustrated in Fig. 5.
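Since a Cox hazard ratio is the exponential of the fitted coefficient, the interpretation rule above can be sketched as follows; the CHD coefficient here is simply back-computed from the reported HR for illustration, not taken from the fitted model:

```python
import math

def hazard_ratio(beta):
    """HR for a one-unit covariate increase in a Cox model: exp(beta)."""
    return math.exp(beta)

def interpret(hr):
    """Verbal reading of an HR, as described in the text."""
    if hr > 1:
        return "higher risk of CKD stages 3-5 (shorter survival time)"
    if hr < 1:
        return "lower risk of CKD stages 3-5 (longer survival time)"
    return "no association"

# Coefficient back-computed from the reported HR for history of CHD.
beta_chd = math.log(4.0603)
print(hazard_ratio(beta_chd), "->", interpret(hazard_ratio(beta_chd)))
```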
The figure provides a forest plot reporting the HR and the confidence interval (CI) of the HR for each covariate included in the Cox proportional hazards model. Only history of CHD, DLD medications, and SBP were found to be significant at the 0.05 cutoff. The figure shows that history of CHD has a positive influence on survival time while DLD medications have a negative influence on survival time. The concordance index, or C-index 33, provides a measure of the discriminative ability of the KM estimate and the Cox proportional hazards model in our study. The KM estimate achieved a perfect C-index of 1.0, indicating that it distinguished between outcomes and ordered survival times within our dataset without error. In contrast, the Cox proportional hazards model yielded a C-index of 0.7510, indicating substantial but not flawless discriminatory power. This comparison suggests that, within our specific context, the KM estimate outperforms the Cox model in terms of discrimination. The KM estimate and the Cox proportional hazards model are both important tools in survival analysis, but they serve different purposes and have distinct advantages. Advantages of the KM estimate over the Cox proportional hazards model include: (i) KM estimates provide a non-parametric way to estimate survival curves, making no assumptions about the underlying hazard function, which is advantageous when the assumptions of the Cox model do not hold; (ii) KM curves are easily interpretable and can be plotted to visualize survival probabilities over time for different groups or categories, making them valuable for descriptive and exploratory analysis; (iii) KM analysis is relatively simple and does not involve the complexities of modeling covariates.
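The C-index values quoted above can in principle be reproduced with a direct pairwise computation. The sketch below is a plain-Python implementation of the concordance index on toy data (not the study's):

```python
def concordance_index(times, events, risks):
    """C-index: concordant / comparable pairs (ties score 0.5).

    A pair is comparable when the subject with the shorter observed
    time actually experienced the event; it is concordant when that
    subject also has the higher predicted risk."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # Order the pair so `a` has the shorter observed time.
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if times[a] == times[b] or not events[a]:
                continue                      # pair not comparable
            comparable += 1
            if risks[a] > risks[b]:
                concordant += 1.0
            elif risks[a] == risks[b]:
                concordant += 0.5
    return concordant / comparable

# Toy data: risks perfectly reverse-ordered with survival time.
times = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risks = [0.9, 0.7, 0.4, 0.2]
print(concordance_index(times, events, risks))  # -> 1.0
```

A C-index of 1.0 means every comparable pair is ordered correctly, 0.5 corresponds to random ordering, and 0.0 to perfectly inverted ordering.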
It is therefore a suitable choice when the focus is solely on estimating and comparing survival probabilities between groups; and (iv) KM is the method of choice when the primary goal is to examine and describe time-to-event data without modeling covariates, and it is particularly useful for studying event occurrence in clinical trials and observational studies. However, while the KM estimate has these advantages, it is limited in its ability to model the impact of covariates on survival time and does not provide HRs; for such analyses, the Cox proportional hazards model is more appropriate. Following the selection of the superior regression model, we extracted its coefficients and intercept, which were crucial for constructing a nomogram. A nomogram is a graphical representation that provides a simple and intuitive tool for predicting outcomes based on the regression model. It consists of four lines: the point line, the risk factor lines, the probability line, and the total point line. The process of constructing these lines has been explained previously 34, 35. The point line is built by assigning values ranging from 0 to 100. The linear predictor ( ) value is determined from the coefficients of the fitted regression model; if an independent attribute is categorical with n categories, ( ) dummy variables are generated. The formula for is as follows: Using this formula, are calculated for each risk category and aligned to the respective risk factor lines. The calculation for is as follows: where represents the regression coefficient for the n-th category of the m-th risk factor, and indicates the value of the risk factor with the largest estimated range of attribute values. The probability line indicates the probability associated with a given total point value and spans the range from 0 to 1.
The total point line is derived by cumulatively summing the values. The Logistic Regression model is represented by the expression . The total number of points corresponding to each value of the probability line can be determined by substituting this equation into the previous expression; in this equation, the value on the probability line is substituted to construct the total point line. By utilizing the coefficients and the intercept value ( ), a nomogram can be developed, as shown in Fig. 6, to aid in clinical decision-making and risk assessment 34.
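The point and probability lines can be sketched numerically: each covariate's contribution beta·x is rescaled to a 0–100 point line against the largest covariate effect span, and the total maps to a probability through the logistic function. The coefficients and ranges below are illustrative placeholders, not the study's fitted values.

```python
import math

# Illustrative logistic-model coefficients -- NOT the study's values.
coefs = {"age": 0.030, "sbp": 0.012, "history_chd": 1.40}
intercept = -7.5
# Observed min/max of each covariate, used to scale its point line.
ranges = {"age": (23, 89), "sbp": (80, 180), "history_chd": (0, 1)}

# The covariate with the widest effect span |beta|*range anchors 0-100.
max_span = max(abs(b) * (ranges[k][1] - ranges[k][0])
               for k, b in coefs.items())

def points(feature, value):
    """Nomogram points (0-100 scale) contributed by one covariate."""
    low = ranges[feature][0]
    return 100.0 * coefs[feature] * (value - low) / max_span

def probability(patient):
    """Logistic model: P = 1 / (1 + exp(-(intercept + sum(beta*x))))."""
    lp = intercept + sum(coefs[k] * v for k, v in patient.items())
    return 1.0 / (1.0 + math.exp(-lp))

patient = {"age": 70, "sbp": 140, "history_chd": 1}
total_points = sum(points(k, v) for k, v in patient.items())
risk = probability(patient)
```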
To predict the risk of CKD stages 3–5 for a patient with the following values: gender = 0, age = 89, history of smoking = 1, DM medications = 1, SBP = 92, and time follow-up = 5 months, each value is assigned to its respective points as illustrated in Fig. 7 .
The resulting point values are 38, 100, 20, 0, 28, and 65. These are summed to an overall point value of 251, which may be used to assess the risk of CKD stages 3–5 by consulting the nomogram’s probability curve. Using these data, we estimate that this patient has a probability of 0.58 of developing CKD stages 3–5. This example demonstrates the practical application of nomograms for predicting clinical outcomes. Figure 8 shows the nomogram results indicating the risk scores based on the established logistic regression model during the follow-up periods of 31–50 and 81–95 months.
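The arithmetic of this worked example can be checked directly; the point values below are those read off the nomogram above, and the dictionary keys are illustrative labels:

```python
# Point values read off the nomogram for the worked example above.
point_values = {"gender": 38, "age": 100, "history_of_smoking": 20,
                "dm_medications": 0, "sbp": 28, "time_follow_up": 65}
total_points = sum(point_values.values())
print(total_points)  # -> 251
```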
Additionally, supplementary Figs. S1, S2, S3, and S4 provide the corresponding results for the follow-up periods of 16–30 months, 51–65 months, 66–80 months, and 96–111 months, respectively. The nomogram assessment considered various factors such as age, gender, medical history, laboratory results, and specific risk factors associated with CKD stages 3–5. By integrating these factors, we generated personalized risk scores for each patient. These risk scores are visually represented in Fig. 9, and a summary of the results is provided in supplementary Table ST1.
The plot of patient ID versus risk score for CKD stages 3–5 provides a visual representation of the varying levels of risk among individual patients within these stages. The x-axis corresponds to the patient ID, a unique identifier assigned to each patient within the dataset; the IDs are arranged in ascending order, so the patients’ data points are plotted sequentially along the x-axis. The vertical y-axis represents the risk score associated with stages 3–5 of CKD, a quantitative measure of the probability or seriousness of CKD-related complications. The plot shows the distribution of risk scores across patients with CKD stages 3–5: higher risk scores are associated with patients who have a higher probability of developing complications from their kidney disease, while lower risk scores indicate a lower probability of such events. The plot allows healthcare professionals to visually identify patients who may require closer monitoring, targeted interventions, or specialized care based on their individual risk profiles, and it provides insight into the overall distribution of risk scores within this specific CKD population, helping to inform future clinical decision-making. The study has several limitations: (i) the small size of the dataset; (ii) patient mortality was not taken into account, so the incidence of CKD may be underestimated; (iii) additional information about the patients’ physical characteristics and occupational history would have helped identify other risk factors for cardiovascular disease; and (iv) a similar dataset with comparable characteristics from a different part of the world was not available for comparison.
This study presents a novel machine learning-driven nomogram for predicting CKD stages 3–5. The proposed approach offers an accurate and personalized risk assessment tool with the potential to improve early detection and preventive strategies. The integration of advanced machine learning algorithms and comprehensive patient data contributes to the robustness and reliability of the developed nomogram. The proposed nomogram has strong predictive capacity and may have major clinical implications for diagnosing CKD stages 3–5. Future research should focus on integrating additional data sources and validating the nomogram through prospective studies, fostering its translation into clinical practice and improving patient outcomes.

Abstract

Chronic kidney disease (CKD) remains one of the most prominent causes of mortality worldwide, necessitating accurate prediction models for early detection and prevention. In recent years, machine learning (ML) techniques have exhibited promising outcomes across various medical applications. This study introduces a novel ML-driven nomogram approach for early identification of individuals at risk of developing CKD stages 3–5. This retrospective study employed a comprehensive dataset of clinical and laboratory variables from a large cohort of diagnosed CKD patients. Advanced ML algorithms, including feature selection and regression models, were applied to build a predictive model. Among 467 participants, 11.56% developed CKD stages 3–5 over a 9-year follow-up. Several factors, such as age, gender, medical history, and laboratory results, independently exhibited significant associations with CKD (p < 0.05) and were utilized to create a risk function. The linear regression (LR)-based model achieved an impressive coefficient of determination (R-squared) of 0.954079, while the support vector machine (SVM) achieved a slightly lower value.
An LR-based nomogram was developed to facilitate risk identification and management. The ML-driven nomogram demonstrated superior performance compared to traditional prediction models, showcasing its potential as a valuable clinical tool for the early detection and prevention of CKD. Further studies should focus on refining the model and validating its performance in diverse populations.
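The R-score reported for the LR and SVM models is the coefficient of determination, R² = 1 − SS_res/SS_tot. A minimal sketch of this computation, using hypothetical observed and predicted values rather than the study's data:

```python
# Sketch: coefficient of determination (R^2), the metric used to compare
# the LR and SVM models. The observed/predicted values are hypothetical.

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]   # hypothetical observed outcomes
y_pred = [1.1, 1.9, 3.2, 3.8]   # hypothetical model predictions
print(round(r_squared(y_true, y_pred), 3))  # -> 0.98
```

A perfect model gives R² = 1; values near the study's 0.954 indicate that the fitted risk function explains most of the outcome variance on the training data.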
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-023-48815-w.
Acknowledgements
The authors would like to acknowledge Saif Al-Shamsi from UAE University, who provided additional information about the dataset.
Author contributions
Research idea and study design: S.K.G, A.H.K; data search and selection: S.K.G; data extraction/analysis/interpretation: S.K.G; methodology: S.K.G, A.H.K; supervision: A.H.K; writing—original draft: S.K.G; writing—review and editing: S.K.G, A.H.K.
Funding
This work was supported by Khalifa University, Abu Dhabi, United Arab Emirates, under grant/award number 8474000408.
Data availability
All data relevant to the study are included in the article or uploaded as supplementary information. The datasets utilized and/or examined in the present study can be accessed from the following source: https://figshare.com/articles/dataset/6711155?file=12242270.
Competing interests
The authors declare no competing interests.

Sci Rep. 2023 Dec 7; 13:21613 (CC BY).
PMC10704617 (PMID 38062447)

Background
The national toll-free phone number for emergency medical assistance in Norway is 113. The 16 emergency medical communication centers (EMCC) are organised as public services within the specialized hospital healthcare system and consist of a network of control rooms. When someone calls 113, the call is received by trained and certified health personnel (registered nurses and paramedics). The police and fire rescue services have separate national emergency phone numbers (112 and 110, respectively). This system differs from that of many other countries, such as Denmark, Sweden, Finland, and the United Kingdom, where there is one phone number (112) for all emergencies [1]. EMCC operators play an essential role in helping callers and are the public's first contact with the healthcare system when facing a medical emergency [2].
When handling an emergency call, the EMCC operator needs to assess the situation and decide on a solution, as a 113-call does not automatically result in ambulance dispatch. The operator gathers information from the caller, provides adequate advice and instructions, and either dispatches an ambulance or transfers the call to a local public emergency room for less urgent situations. For decision support, the dispatchers use the Norwegian Index for Medical Emergency Assistance [ 3 ].
In the context of an emergency call, the EMCC operator relies on information provided by the caller, which depends on the EMCC operator’s ability to fully comprehend and appreciate the situation. Hence, the interaction between the EMCC and the caller is vital, and understanding the dynamics of this communication is paramount [ 4 ].
It is known that being in a dramatic situation, such as a medical emergency, can profoundly affect a person’s life [ 5 ]. The nationwide project “Saving lives together” (“ Sammen redder vi liv ”) recommended further research to enhance the interaction between the caller and the EMCC [ 6 ].
Previous research has focused primarily on dispatcher roles and perspectives [7, 8]. Some studies have discussed how a dispatcher's early recognition of cardiac arrest can increase out-of-hospital cardiac arrest survival rates, while others have shed light on the effect of successful communication between the EMCC and the caller [9, 10]. However, to our knowledge, no previous studies have addressed this issue from the callers' perspective. Hence, this study aimed to obtain a better understanding of callers' actual experiences and how they perceived their interaction with the EMCC during 113 calls.
Setting
The study was conducted in Bergen, the second-largest city in Norway. The Bergen EMCC covers an area of approximately 460,000 inhabitants and handles nearly 60,000 calls annually. To get in contact with past callers to the emergency medical number, an SMS message was sent to all mobile phone numbers from which 113 calls had been made between November 2020 and February 2021 (Additional file 1). The Christmas period (10 days) was excluded because it differs from the rest of the year, and approximately five well-known frequent callers were excluded to protect these presumably vulnerable individuals.
The aim was to measure the general level of satisfaction and the callers' willingness to provide feedback, and to recruit informants for the qualitative part of the study.
Quantitative part: SMS-study
In the SMS, the callers were asked to rate their recent conversation with the EMCC on a scale from 1 to 6 (1 = “very unsatisfied” and 6 = “very satisfied”). The SMS also included a second question asking whether the respondents were willing to be contacted by the researchers for further questions. The text messages were sent only once.
Qualitative part: interviews
The second, qualitative part of the study consisted of semi-structured interviews with the informants recruited in the first part. As we had limited previous data on which to base our interviews, and wished to fully grasp the callers' experience, we wanted the informants to speak as freely as possible while still ensuring some structure. We therefore based the interviews on a semi-structured interview guide (Additional file 2). Follow-up questions were based on the informants' answers, inspired by the systematic methodology of thematic analysis [11]. All informants were asked the same main questions.
To test the interview guide, we conducted an expert interview with a former dispatcher with more than 20 years of experience. This also helped provide additional follow-up questions for the interview guide. The expert was specifically asked what he would have wanted to know about the callers' experience and what kind of information would be helpful or provide insight for a dispatcher doing his job.
We then conducted 31 semi-structured interviews with 16 satisfied callers (ratings 4–6) and 15 dissatisfied callers (ratings 1–3). We strove for a balanced number of interviews in each group. However, this selection did not reflect the distribution of responses, as most respondents were satisfied. We therefore contacted all of the most dissatisfied respondents (ratings 1 and 2), but could not contact the same proportion of the most satisfied respondents (ratings 5 and 6).
Two researchers (TS and KN) conducted all interviews with the informants in March and April of 2021, either by phone (81%) or via video conferencing (19%) due to Covid-19 restrictions.
Data analysis
Interview data collection, including notes, transcription, first-hand analysis, coding, sorting, and analysis, comprises important steps in ensuring that data are properly processed [12]. The interviews were audio recorded and later transcribed manually. Immediately following each interview, the researchers conducted a first-hand analysis, comparing their first impressions and interpretations. This first-hand analysis was particularly useful in ensuring that the essence of each interview was identified. These sessions were recorded and were helpful later in the process, permitting us to go back and see whether the analysis was consistent with, or had changed from, our first impression.
A pattern emerged from the very beginning, yielding clear key words for coding, and data saturation was reached quickly.
Some codes applied to most informants, while some unexpected codes emerged in several interviews. The interviews were then transcribed and analysed in more depth. Initially, we identified 83 nodes, with 1 to 110 citations per node.
Thereafter, each code was analysed and merged into relevant subgroups and main themes when they were understood to belong under the same umbrella. The main theme categories emerged naturally and made it possible to analyse several codes that turned out to be associated. Eventually, this resulted in the seven main categories presented in Table 2.
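The merging of codes into main themes can be illustrated as a simple mapping-and-aggregation step; the code names, theme labels, and citation counts below are hypothetical stand-ins for the study's 83 nodes and seven categories:

```python
# Sketch: merge interview codes into main themes by summing citation counts
# per theme. Code names, themes, and counts are hypothetical illustrations.
from collections import defaultdict

# Hypothetical (code -> citation count) pairs and a code-to-theme mapping.
code_citations = {"fear_of_hysteria": 12, "high_threshold": 25,
                  "expect_ambulance": 18, "feeling_listened_to": 30}
code_to_theme = {"fear_of_hysteria": "threshold for calling",
                 "high_threshold": "threshold for calling",
                 "expect_ambulance": "caller expectations",
                 "feeling_listened_to": "positive experiences"}

def merge_into_themes(citations, mapping):
    """Sum citation counts of all codes belonging to the same main theme."""
    themes = defaultdict(int)
    for code, count in citations.items():
        themes[mapping[code]] += count
    return dict(themes)

print(merge_into_themes(code_citations, code_to_theme))
```

The real analysis was of course interpretive rather than mechanical; the sketch only shows the bookkeeping of collapsing many codes into a few umbrella themes.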
A total of 4807 SMS messages were sent to recent callers, of which 1680 (35%) responded. The vast majority (88%, rating 5 or 6) were very satisfied with the 113-call (Fig. 1 ).
Based on the substantial response to the SMS survey, it was evident that people needed or wanted to provide feedback on their experiences with the 113 services. The ratings were also the first measure of the callers' general level of satisfaction prior to further in-depth research into this concept.
A total of 823 (49%) respondents volunteered for the interviews, making informant recruitment straightforward. Of these, 50 callers were randomly selected, distributed equally across all rating values (1 to 6). We were unable to perform 19 interviews because the selected numbers were either unavailable or unanswered. The 31 callers interviewed (14 (45%) men and 17 (55%) women) had surprisingly varied backgrounds. Seven had a relevant medical history. Some called 113 for the first time, whereas 21 (68%) had previous experience (Table 1).
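The proportions reported above can be reproduced from the published totals with simple arithmetic (a sketch using only the figures given in the text):

```python
# Sketch: reproduce the reported survey proportions from the raw totals
# given in the Results (messages sent, responses, interview volunteers).

sent = 4807         # SMS messages sent to unique numbers
responded = 1680    # responses received
volunteered = 823   # respondents willing to be interviewed

response_rate = round(100 * responded / sent)           # -> 35 (%)
volunteer_rate = round(100 * volunteered / responded)   # -> 49 (%)
print(response_rate, volunteer_rate)
```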
By interviewing the 31 informants, including both satisfied and dissatisfied individuals, several factors were identified. Table 2 summarizes the main results from the interviews. In addition, some topics were unexpectedly mentioned as important to the callers. Additional file 3 provides more quotes with their corresponding codes and main themes.
Callers expressed a high threshold for calling the EMCC. None of the informants stated that calling 113 was their first choice; many other options were considered first. Several informants stated that they wanted to stay in the comfort of their own homes rather than in a hospital or emergency room. They preferred to take care of themselves, and most importantly, they did not want to be sick. Several participants also expressed concerns about being perceived as hysterical when calling the EMCC.
Participant 24 (rated 4) described the fear of being perceived as a hysterical parent. Participant 13 (rated 2) said that although his doctor had advised him to call 113 immediately when needed, he wanted to wait as long as possible before calling. Several informants said that they needed to be certain before calling for help, as they did not want to unnecessarily disturb the EMCC or misuse public resources. They did not want to place a financial burden on the healthcare system or their families and communities. Participant 28 (rated 3) described calling 113 as a difficult choice, as it could have consequences for other people in more urgent need: if they were helped, someone else who possibly needed an ambulance more might not receive a timely dispatch.
Hence, the choice of calling 113 seemed to be difficult. The informants said they would only make the call if they had no other options. Participant 3 (rated 6) described calling 113 as their ‘biggest cry for help.’ Several participants explained that bad experiences while calling the EMCC increased the threshold for future calls.
Nearly all informants with positive experiences stated that the EMCC reassured them that they had made the right decision when calling for help, even when an ambulance dispatch was not the final solution. This is particularly important if the caller has had previous negative experiences.
Even medically trained participants expressed hesitation in challenging the operator's decisions. For example, informant 5, a nurse (rated 1), perceived that she could not ask for an ambulance. The result was a higher threshold, as they dreaded calling back or did not expect to obtain sufficient help. Another instance was Informant 15 (rated 2), who, after several rejections, wondered whether it was worth calling 113 at all.
The caller’s expectations
All participants expected to receive help soon after the call. Most informants had the impression that calling 113 equaled getting an ambulance and/or being admitted to the hospital. Only a few callers, most of whom had a medical background, called 113 for advice.
Several informants expected the dispatcher to know, or even be able to see, the location from which they were calling, and that the EMCC had full access to their previous medical records. Most callers did not distinguish between the various organizational units within the healthcare system, as they considered them a uniform whole.
Considering the callers’ high threshold for calling 113 and their deep wish to take care of themselves, many described a feeling of relief when the operator said that there was no need for an ambulance. The callers also understood that the EMCC had to prioritize resources, especially when their situation was not urgent.
The context and acknowledging the caller’s perspective
Several informants performed well beyond expectations during their conversations with the dispatcher. Participant 26 (rated 3) described an incident in which he was talking to the EMCC while providing CPR, using a defibrillator, and simultaneously assisting the air ambulance in landing safely.
Participant 27 (rated 3) sat with the patient, a stranger she had found on the street, for almost an hour, even after he had threatened her with a knife. He had collapsed on his way 'to kill someone', as he explicitly explained. The informant never considered leaving him, because he needed help and she did not want him to be a danger to anyone else, staying put until the police and paramedics arrived.
Participant 5 (rated 1), a registered nurse, described feeling pressured to drive her severely ill husband to the emergency room. She was certain that the patient was about to lose consciousness as his condition worsened. She was alone, driving through the city center during rush hour and combining the roles of driver and nurse. When asked if she ever considered stopping the car or calling 113 again, she replied, ‘ How could I? I had already talked to them, and they had made their decision clear. Furthermore, it was not possible as I was both driving and taking care of my husband.’ Participant 11 (rated 2) also felt pressured to drive her severely ill husband to the emergency room. In retrospect, she described this as a bad idea, as she was emotionally imbalanced, scared, and driving too fast, ‘I drove him myself, but I shouldn’t have [done so] because I was so scared, and drove so fast... ’.
In addition, the informants expressed great respect for the authority of the EMCCs. Participant 21 (rated 6) followed the EMCC operator's advice to take a drug he knew could be potentially dangerous to him. He did not question this advice or inform the operator of his condition, because he trusted the operator's expertise. This information was an accidental finding, as the informant was very satisfied with the call.
In addition to the workload of having someone in immediate distress (sometimes themselves), some callers felt stressed by the EMCC operators' numerous questions. Participant 31 (rated 3) said he became frustrated with answering several questions when he wanted to comfort the patient. Informants who were alone at the time of the call, calling on their own behalf, described finding the situation challenging and that, in the moment, even answering simple questions was a strain. Participant 21 (rated 6) said, 'All those questions when you're very ill shouldn't be necessary. (...) It's not so easy when you're feeling that ill.'
Some patients were less available for extensive questioning than others. They may have been in the middle of an incredibly stressful situation or calling about an emergency concerning themselves. Worse still, some questions were perceived as irrelevant. Understandably, the EMCC operators wanted to talk directly to the patients; however, several informants, as next of kin, highlighted the importance of the dispatcher listening to them. They felt pressured to hand the phone over to patients who were in no state to take care of, or explain, themselves.
The informants also described the importance of explaining why the questions were asked, especially those that might be perceived as irrelevant or unnecessary. Participants with negative experiences often described feelings of not being listened to, for instance when a dispatcher followed a prescribed list of questions rather than listening or asking more relevant questions. Participant 31 (rated 3): 'When I asked, and requested a confirmation, that the ambulance was on its way, she said that it was not. Then she asked some questions that were, to me, meaningless.'
Positive and negative experiences with the EMCC operator
A majority of the informants expressed that feeling taken seriously and listened to by the operator was the main reason for their satisfaction with the call; an operator who genuinely listened to their story made them feel supported and taken care of. Several participants perceived the EMCC's follow-up questions as confirmation that the incident had been taken seriously.
Participants with positive experiences described feelings of cooperation and alliance with the dispatcher. For example, staying on the line until the ambulance arrived, offering to transfer the call directly to the doctor's office, or having the doctor call the patient back were all perceived as positive elements of the 113 call. It is also important to give callers the feeling that they can change their minds and call back at any time, for example if they regret agreeing to drive the patient to the emergency room instead of getting an ambulance.
Several informants explicitly highlighted the importance of the EMCC confirming that calling 113 had been the correct decision.
Participant 18 (rated 6) described the operator as a 'very nice lady': 'She did great and asked all the right questions. I called, and we agreed on what I should do. So that was not a problem.' She explained that the operator seemed to understand the situation and that the two of them, operator and caller, cooperated to find a solution. When asked whether she felt she had the possibility to choose and decide, she answered, 'Yes, absolutely. She told me that if I wanted an ambulance, she would send it right away. It was my choice to be transported by my husband.' The respondent further described that she genuinely felt she could change her mind if she wanted to, as the operator had asked whether she was comfortable with that solution and said that she had to feel safe about it. She described the operator's calmness as a key factor in the success of this call.
We found that callers accept many solutions if they are given proper explanations and information. When asked whether there was anything in particular the informant remembered from the call and how the operator came across, informant 20 (rated 5) answered that the operator was 'pretty professional and comfortable.' When asked about the meaning of 'professional,' the informant said: 'She listened to what I said, took it seriously, acted upon it, and asked questions that were, in my opinion, relevant.'
In contrast, some callers did not feel they had been taken seriously. Participant 13 (rated 2) explained, 'It seemed like I was seen as someone who was just joking.' Furthermore, she described the dreadful feeling of the operator not believing her and that she felt she had to argue for the help she needed.
Some informants experienced delayed assistance because of prejudice. For example, informant 27 (rated 3) sat with the patient for almost an hour, even after he had almost stabbed her with a knife, and never considered leaving him. This also demonstrates the callers' strong feelings of responsibility. Given this, and being often emotionally affected, callers will do nearly anything the EMCC asks them to do. Several informants described the EMCC as an authority they were reluctant to question. It is important to note that operators and their words have a significant impact on callers.
A common scenario in which prejudice interrupted communication was calls involving suspected intoxication. Nearly all informants who called in such situations felt that they were not taken seriously. Participant 22 (rated 1) was explicitly told by the EMCC that they did not believe her because of several recent non-serious calls from other young people. Informant 10 (rated 2), calling about non-alcohol-related injuries, felt judged by the fact that the incident happened on a Saturday evening: 'I felt that the attitude was "It is Saturday evening, and falling down some stairs...," so there must be alcohol involved. I felt that they didn't completely believe me.'
The informants who were dissatisfied with the call described the operator as uninterested, passive, oblivious, ignorant, arrogant, and even unprofessional or 'tired of their job,' as if the caller had disturbed the dispatcher by calling 113. Participant 16 (rated 2) said, 'I felt like she was sitting there, rolling her eyes.'
A considerable number of informants said that they felt belittled or sad after the conversation with the EMCC, describing feeling rejected even though they themselves had been reluctant to ask for an ambulance. In some cases, after having had to argue for help, dissatisfied informants feared not getting help if they called another time.
Consequences beyond the actual situation
Several informants stated that they blamed themselves for delayed or unfavorable medical assistance. They described that they might have been unclear about their communication or even made a bad impression. Participant 24 (rated 4) said, ‘I don’t know if I was unclear in my communication. I could have been sloppy and tired and not knowing exactly how to articulate myself, and that might have caused an inaccurate evaluation at the other end. But that shouldn’t be decisive for the outcome.’ The informants would replay the conversation in their minds, trying to find mistakes that they had made that caused medical assistance to be less optimal or delayed. Several participants expressed frustration and wondered what they needed to say the next time to get help.
Participant 31 (rated 3) said that he had decided that, if he were to call in the future, he would ask for an ambulance and then hang up, avoiding the risk of any delay from being forced to answer many irrelevant questions. He actively planned this alternative strategy, hoping to get more efficient help next time. Participant 24 (rated 4) described having intentionally '...learned medical terminology from his medical doctor sister to get the EMCC operator's proper attention in the call,' and participant 15 (rated 2) stated that after several rejections, she wondered whether '...it was even worth calling 113 at all.'
The callers want to give feedback to the EMCC
All informants expressed that they would gladly have received an SMS requesting feedback after a 113 call, as they were used to receiving similar SMS questionnaires after contact with nearly all other services. Several informants had wanted to complain about their unsatisfactory EMCC experience but had no idea where to start, and therefore gave up on the idea; several let it go because, in the experiences they described, nobody had died as a result.
Additional unanticipated findings
Pandemic related issues
Several informants experienced delayed help due to COVID-19-related questions at the beginning of the conversation. For informant 31 (rated 3), these questions were asked before more important questions concerning the patient's vital signs. Informant 11 (rated 2) perceived being refused an ambulance due to fear of COVID-19, as did informant 30 (rated 4), who got the impression that 'if the patient had COVID-19 symptoms, she would not get an ambulance.' An interesting finding was that none of the informants expressed any concern about infection when having to meet medical personnel. They experienced life-threatening situations and desperately requested help; their dissatisfaction was with the help being delayed by questions about COVID-19.
Paramedic’s behavior
Participant 10 (rated 2) was met with degrading comments from the paramedics and wondered whether they had been influenced by the unsympathetic EMCC operator. Participants 16 (rated 2) and 4 (rated 6) had heard paramedics explicitly say that there was no need for an ambulance during their incidents. In the case of informant 16, this unpleasant comment from a professional paramedic became the last memory the patient had in her home before passing away a few days later.
Inter-agency coordination
Informants 17 (rated 2) and 26 (rated 3) both called 113 because of incidents that required the involvement of both the police and the fire and rescue services. After informing the EMCC about the situation, both perceived the conversation as unstructured and chaotic, especially as they had to repeat all the information when other emergency operators joined the call.
Video calls
Video calls have quite recently been introduced in the Norwegian emergency medical services [13]. Participants 14 (rated 5) and 22 (rated 1) accepted the EMCC's offer to use video to better understand their situations. In these two incidents, the test project had a crucial impact, as the participants felt that their despair was understood and believed. 'But then they asked me to accept a video conference so I could film him and show them that this, in fact, was true. And how did it work? Well, they saw it, then said that they would come, and then came for him.' (Participant 22).
The SMS survey results showed that most callers were very satisfied with their conversations with the 113 operators. In addition, based solely on responses to the SMS survey, it is clear that people want to provide feedback on their experiences with this part of the healthcare service.
The interviews revealed surprisingly clear and consistent findings, concordant across both satisfied and dissatisfied callers. First, the EMCC operator is expected to be highly conscious of all the factors affecting the caller and to know that their words matter profoundly. This is a study of how human beings experience rather extreme life situations. As these are often life-changing, the experience of receiving help, or not, in such circumstances is profound. In this respect, the interviews yielded clear findings, as discussed above.
As evident from this study, dispatchers must remember that every individual situation has its own context, even when it shares similarities with other comparable situations. This highlights the importance of obtaining a correct and thorough understanding of each unique situation as soon as possible [14–16]. By carefully choosing words and trying to achieve meaningful communication, e.g. using open-ended rather than closed-ended questions, EMCC operators can quickly gain the necessary information [17].
Møller and colleagues analyzed several thousand emergency calls and found that the most frequent call category was 'unclear problem', and that most calls were deemed urgent. These two factors, especially when combined, demonstrated the need to improve support for the operators [18]. Our study supports the need for additional tools to help operators in challenging, unclear, and urgent situations. For instance, the use of video calls proved highly effective.
Roivainen et al. showed in their 2020 observational pilot study that proper telephone triage by nurses can reduce non-urgent EMS missions by one third [2]. This involves telephone counseling, care instructions, and guiding patients to services other than the EMS. In other words, a significant proportion of the situations callers seek help for can be resolved by communicating with the EMCC operator. If the operator is highly conscious of how callers are met, our study indicates that a reduction in ambulance missions need not mean dissatisfied callers. Callers and patients who do not need urgent care can be treated equally well by other means, as long as the operator communicates in a caring way.
It is, after all, much more challenging to gain sufficient information over the telephone. Salk et al. found evidently poorer agreement between assessments of the same patient made in person and over the phone [19]. It is therefore essential that operators are constantly aware of this barrier to providing proper help to the caller, including, at times, listening more to the next of kin than to the patient. Lindström and colleagues reflected on the need to increase our understanding of how ill patients communicate with professionals, especially over the phone [20]. This study found that the threshold for calling is high; many fear being a burden or being perceived as hysterical. These factors may cause important pieces of information to be held back or not communicated at all. Callers may be overwhelmed by the situation, because they are emotionally affected and/or overwhelmed by the tasks they are handling. Many also have tremendous respect for the EMCC's authority, which affects the communication.
It has also been found that barriers and opportunities related to the EMS operators or the callers are the main factors influencing the assessment of calls [20]. For both barriers and opportunities, communication from the professional side was among these factors. One barrier was a lack of structure in the call; another was not focusing on additional information in the call, such as the caller's breathing, so that the main issue could be lost in the caller's description of other, less severe symptoms. In this way, important information could be lost for several reasons, and proper communication is always the professional's responsibility. An opportunity was the operator's use of different communication strategies, such as closed-loop communication, in which the operator repeats and/or summarizes the information given by the caller, and the caller confirms or corrects the conclusions. This correlates with our finding that listening with genuine care and interest is essential. If EMCC operators focus on mindful interaction with callers, taking them seriously, explaining properly, and preferably establishing an alliance, it becomes easier to ask the right follow-up questions and find the best solution for each individual caller. Holmström and colleagues emphasize the same factors [21]. They also highlight the challenges on the professional's side of an emergency call. Of the themes of challenges identified in their study, calls from third parties and calls about unclear situations were evidently challenging for several informants. This shows that calls operators find challenging are often experienced the same way by the actual callers. Lack of visual cues and knowledge about the patient, time pressure, and the fear of making mistakes are all factors aggravating the operators' situational awareness and ability to perform optimally.
Most callers seemed to be satisfied with the services provided by the EMCC. Our findings indicate that the outcome of the conversation, which for a 113 call usually means getting an ambulance or not, is not the main criterion determining whether a caller is satisfied with the EMCC service. What matters is how the dispatcher treats them. Callers want appropriate help; they might initially call for, and expect, an ambulance, but our study demonstrates that they will accept and be content with a given solution as long as they feel listened to and taken seriously by a genuinely caring medical professional.
In every situation, the caller knows his or her own situation best. If a professional strives to establish an alliance with a caller, this will benefit the patient and help resolve the situation. We found that addressing the caller's expectations by confirming that it was the right decision to call seemed efficient in establishing an alliance between the caller and the dispatcher.
Therefore, we recommend that EMCC personnel focus on communication as their most important tool, always explicitly assuring the caller that it was the correct decision to call 113, and establish an easily accessible method for providing feedback. Implementing SMS surveys as used in this study could become a standard procedure for exploring callers' evaluation of EMCC services. This would be easily accessible to the caller and could function as real-time monitoring of user satisfaction.

Background
The Emergency Medical Communications Center (EMCC) is essential in emergencies and often represents the public’s first encounter with the healthcare system. Previous research has mainly focused on the dispatcher’s perspective. Therefore, there is a lack of insight into the callers’ perspectives, the attainment of which may contribute significantly to improving the quality of this vital public service. Most calls are now made from mobile phones, opening up novel approaches for obtaining caller feedback using tools such as short-message services (SMS). Thus, this study aims to obtain a better understanding of callers’ actual experiences and how they perceived their interaction with the EMCC.
Methods
A combination of quantitative and qualitative study methods was used. An SMS survey was sent to the mobile phone numbers of everyone who had contacted 113 during the preceding months. This was followed by 31 semi-structured interviews with respondents who were either satisfied or dissatisfied. The interviews were examined using thematic analysis.
Results
We received 1680 (35%) responses to the SMS survey, which was sent to 4807 unique numbers. Most respondents (88%) were satisfied, rating their experience as 5 or 6 on a six-point scale, whereas 5% answered 1 or 2. The interviews revealed that callers were in distress before calling 113. By actively listening, taking the caller seriously, and affirming that it was the right choice to call the emergency number, the EMCC makes callers feel helped and satisfied, regardless of whether an ambulance was dispatched to their location.
If callers did not feel taken seriously or listened to, they were less satisfied. A negative experience may lead to a higher distress threshold and an adjusted strategy before the caller contacts 113 the next time. Callers with positive experiences expressed more trust in the healthcare system.
Conclusions
For the callers, the most important thing was being taken seriously and listened to. Additionally, they welcomed dispatchers expressing empathy and affirming that they had made the right choice in calling the EMCC, as this positively affects communication with callers. The aim of a 113 call is to cooperate in finding a solution to the caller's problem.
Supplementary Information
The online version contains supplementary material available at 10.1186/s13049-023-01161-2.
Keywords
Limitations
The response rate was 35%, which is rather high compared with other SMS surveys. Other callers may have held opinions that are not reflected in the 31 interviews. The data from the satisfied callers proved to be surprisingly consistent, but due to the available resources, it was impossible to contact all informants who agreed to participate in an interview. To obtain a balanced view, we intended to interview 50 persons but ended up interviewing 31 respondents. We have no insight into why other callers did not respond to the SMS and therefore do not know all the reasons behind the lack of feedback. Language barriers could be one such reason, but it was not possible to investigate this further.
Nevertheless, the information received during these interviews was consistent, resulting in saturation. Another limitation is that the number of unsatisfied respondents was low, and the selection did not reflect the balance of responses in the first part of the study. Two researchers conducted all the interviews to reduce the risk of personal preferences influencing the results.
Another identified limitation is that the 31 informants were, based on what could be determined over the phone or on screen, quite a homogeneous group of native Norwegian men and women. One might wonder whether, for instance, non-native Norwegians did not answer due to language barriers. This was not something we could investigate this time, but it is definitely an interesting aspect. At the same time, the 31 informants were a very diverse group in terms of gender and age; other identifying characteristics were impossible to discover. This probably led to an unintended first-hand selection even before we started calling the informants.
A factor that would be interesting to investigate further is that EMCC operators are individuals with personal variability, which is likely to affect a caller's experience. This study shows the importance of the human factor in these interactions, meaning that an operator's personality is of significance, not only the medical expertise.
Supplementary Information
Abbreviations
EMCC: Emergency medical communication centre
EMD: Emergency medical dispatcher
Acknowledgements
We would like to express our deepest gratitude to Bergen EMCC for facilitating the identification of informants; to the police Crisis and Hostage Negotiator Service (KGF) and the former dispatcher expert for their significant knowledge and inspiration; and, most importantly, to all our informants for sharing their honest experiences.
Author contributions
LM and GB contributed to the acquisition of data, and TBS and KAVN contributed to the data analysis. TBS, KAVN, OJS, LM, and GB contributed to the interpretation of the data. TBS and KAVN drafted the article. TBS, KAVN, OJS, LM, and GB critically revised the article for intellectual content. All authors made substantial contributions to the conception and study design, provided final approval of the version to be published, and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All authors attest to meeting the four ICMJE.org authorship criteria.
Funding
Open access funding provided by University of Bergen. Open-access funding was provided by the Norwegian National Advisory Unit on Emergency Medical Communication (KoKom) at Haukeland University Hospital, Bergen, Norway. The study was carried out by the authors under the terms of employment for the individual with their respective affiliations. The authors did not receive any external funding for this study.
Availability of data and materials
The dataset from this study is available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
The study was approved by the Norwegian Centre for Research Data (NSD) (ref. number 223378) and by the Regional Committees for Medical and Health Research Ethics (REK, ref. no. 214838) before the project initiation. Participation was voluntary, and no health-related information or outcomes were collected as reasons for calling 113. Only one SMS was sent to each caller, hoping to limit the potential burden of people being reminded of the call, as it was potentially a stressful situation for them.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.

Scand J Trauma Resusc Emerg Med. 2023 Dec 7; 31:94
Introduction
Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) has garnered considerable scientific interest due to advantageous features such as high throughput, rapid analysis, and robustness against high salt concentrations. However, the effective utilization of MALDI-MS for the analysis of low-molecular-weight (LMW) compounds has been hindered by certain limitations, including suppressed analyte signal intensity and illegible MS spectra [ 1 , 2 ], because the most widely used organic matrices, such as α-cyano-4-hydroxycinnamic acid (CHCA) and 9-aminoacridine (9-AA), generate matrix-related background in the MS region below m/z 700 that interferes with the MS signals of LMW analytes [ 3 , 4 ]. Besides, the "sweet spot" phenomenon resulting from heterogeneous co-crystallization leads to inferior reproducibility [ 5 , 6 ]. Unlike traditional MALDI-MS, surface-assisted laser desorption/ionization mass spectrometry (SALDI-MS) does not require any organic matrix, benefiting LMW compound detection with negligible matrix-related interference [ 7 ]. In the last decades, multifarious nanomaterials (e.g. porous silicon [ 8 , 9 ], carbon-based materials [ 10 – 12 ], metal oxides [ 13 – 16 ], metal-organic frameworks [ 17 , 18 ], and metal nanomaterials [ 19 , 20 ]) have been developed for the fabrication of SALDI nano-matrices. The efficacy of these nano-matrices has been demonstrated extensively across several applications of SALDI-MS. For instance, Pleskunov et al. reported a niobium nanoparticle-based SALDI-MS method for the analysis of multiple phospholipids on mouse brain tissue [ 21 ]. Minhas et al. prepared nanostructured silicon SALDI-MS substrates to detect anabolic doping agents and their metabolites in saliva and urine [ 22 ]. Chu et al. fabricated a metal nanoparticle-based sandwich immunosorbent sensing platform for the SALDI-MS detection of viruses and viral nonstructural proteins [ 23 ].
For the detection of p-phenylenediamine, Peng et al. introduced a novel magnetic single-layer nano-MXene that could function concurrently as an enrichment material and a SALDI matrix [ 24 ]. Despite the advancements made in SALDI nano-matrices, certain limitations persist, including disruptive carbon or metal clusters, unintended surface fouling, inadequate physicochemical stability, and indistinct alkali adduct ions. Therefore, investigating a novel nano-matrix featuring low background, good stability, and high ionization efficiency is still of great value. Recently, the evidentiary significance of fingerprint recognition utilizing minutiae and the chemical composition of endogenous/exogenous chemicals has been substantiated [ 25 , 26 ]. Such chemical information provides personal details such as dietary habits, physical condition, and lifestyle [ 27 , 28 ]. Furthermore, determining the age of a fingerprint holds forensic importance, as it can help establish the temporal scope of criminal cases [ 29 , 30 ]. However, few approaches determine the chemical composition and the age of a fingerprint simultaneously.
In this research, we synthesized samarium-doped indium vanadate nanosheets (IVONSs:Sm) as a novel nano-matrix for the negative-ion LDI-MS analysis of LMW molecules. The as-synthesized IVONSs:Sm exhibited enhanced optical absorption, high charge mobility, and a large surface area, which not only contributed to efficient absorption and transfer of laser energy but also facilitated the deprotonation of the analytes. Generally speaking, better performance in the analysis of LMW compounds can be obtained in negative-ion mode, which avoids multiple adduct peaks [ 31 ]; hence, IVONSs:Sm-assisted LDI-MS in negative-ion mode yielded enhanced MS signals and legible MS spectra. Consequently, IVONSs:Sm-assisted LDI-MS with good sensitivity and repeatability was successfully applied in fingerprint analysis. Along with the ridge pattern, chemicals including endogenous (e.g., fatty acids) and exogenous (e.g., drug residues) molecules on fingerprints could be identified via IVONSs:Sm-assisted LDI-MS imaging. Moreover, additional trials investigating the determination of fingerprint age and the assessment of biomarkers related to hepatic injury demonstrated the potential efficacy of this method as a viable option in both forensic and clinical contexts.
Chemicals
All starting materials, unless mentioned otherwise, were obtained from commercial suppliers and used directly. Ammonium metavanadate (NH 4 VO 3 , 99.95%), hexadecylpyridinium bromide (CPDB, 96%), lauric acid (LA, analytical standard), myristic acid (MA, analytical standard), palmitoleic acid (PA, analytical standard), oleic acid (OA, analytical standard), n-butanol (C 4 H 10 O, 99%), n-octane (C 8 H 18 , 96%), isopropanol (C 3 H 7 OH, 99.7%), nitric acid (HNO 3 , 70%), and acetonitrile (CH 3 CN, HPLC grade) were purchased from Aladdin Reagent Co., Ltd. (Shanghai, China). Indium nitrate (In(NO 3 ) 3 , 99.99%), samarium nitrate (Sm(NO 3 ) 3 , 99.99%), and potassium thiocyanate (KSCN, 99.95%) were purchased from Macklin Reagent Co., Ltd. (Shanghai, China). Methanol (CH 3 OH, anhydrous) and ethanol (C 2 H 5 OH, anhydrous) were purchased from Sinopharm Chemical Reagent Co., Ltd. (Shanghai, China). α-Cyano-4-hydroxycinnamic acid (CHCA, 99%) and 9-aminoacridine (9-AA, 97%) were obtained from Sigma-Aldrich Reagent, Co., Ltd. (St. Louis, USA). Bromoethane (C 2 H 5 Br, 99%) was provided by Fuyu Chemical Reagent Factory (Tianjin, China). Chemically converted graphene (thickness 0.8–1.2 nm) and nano-sized CeO 2 (diameter 4–8 nm) were purchased from Ruixi Biological Technology Co., Ltd. (Xi'an, China). Hand lotion (Blue Moon, Co., Ltd. Guangdong, China) and acne cream (Liangfu Pharmaceutical Co., Ltd. Shandong, China) used in the determination of exogenous LMW compounds were purchased from a local supermarket and pharmacy. Ultrapure water (≥ 18.2 MΩ•cm) produced by an Aqua-pro water system (Aquapro, Chongqing, China) was used throughout the experiment.
Apparatus
TEM images of the nano-matrices, including IVONSs:Sm, graphene and CeO 2 , were collected on a transmission electron microscope (FEI Tecnai F20) at 200 kV. XRD patterns were recorded on a powder X-ray diffractometer (D8 Advance Bruker) from 10° to 80° (4°/min) using Cu-Kα radiation (λ = 0.1542 nm). Diffuse reflectance UV-visible spectra were acquired on a UV-vis spectrometer (PerkinElmer UV3600). XPS spectra were measured using an X-ray photoelectron spectrometer (Thermo ESCALAB 250XI), which was calibrated based on the binding energy of the C 1s peak at 284.8 eV. Vanadium content of the IVONSs:Sm deposited on the copper conductive tape was measured by ICP-MS (Agilent 8900) with a microwave digestion system. SEM observation of fingerprint samples was performed on a scanning electron microscope (ZEISS Sigma 300) using 15 kV beam energy. AFM and SPV images of the indium vanadate nanosheets were taken on an atomic force microscope (Bruker Dimension ICON). Images were acquired at scan rates of 1.0 Hz, and the tip lift height was 120 nm in tapping mode for potential mapping. The SPV images were derived from the changes in contact potential difference before and after UV light irradiation. The photothermal effect of the nano-matrices on glass slides was evaluated under 405 nm laser irradiation (2 W/cm 2 ); the infrared thermographic images and temperatures were determined with an infrared thermal imaging camera. The hydrophilicity/hydrophobicity of the nano-matrices was analyzed by measuring the aqueous contact angle using an optical contact angle measuring and contour analysis system (Dataphysics OCA20). Determination of fatty acids in fingerprint samples was carried out by HPLC-ESI-MS (Agilent 1290 Infinity/6460 Triple Quadrupole).
All the MALDI-MS and tandem MS experiments were carried out on a mass spectrometer (Bruker Ultraflextreme MALDI TOF/TOF) with neodymium-doped yttrium aluminium garnet solid-state laser (Nd:YAG laser, 355 nm wavelength) in positive- or negative-ion mode.
Synthesis of IVONSs:Sm
The microemulsion-assisted solvothermal method was employed to synthesize IVONSs:Sm and pure IVONSs. In a typical procedure for IVONSs:Sm synthesis, two types of microemulsion (ME In and ME V ) with different aqueous phases were prepared. Both ME In and ME V contained 3.5 mmol of the emulsifier CPDB, 25 mL of the co-emulsifier n-butanol, and 100 mL of the oil phase, n-octane. For the ME In system, the microemulsion was vigorously magnetically stirred for 150 min after being mixed with 5 mL of an aqueous solution containing indium nitrate (50 mM) and samarium nitrate (2.5 mM). Meanwhile, an aqueous solution of ammonium metavanadate (5.5 mL, 50 mM) was added to the ME V system under continuous stirring until the microemulsion was homogeneous. Afterwards, the ME In system was added dropwise into the ME V system under magnetic stirring for 2 h at room temperature, with nitric acid (2 M) added to adjust the pH value to 1–2. The mixed microemulsion was transferred into a Teflon-lined stainless-steel autoclave for solvothermal treatment at 170 °C for 20 h, followed by natural cooling to room temperature. Finally, IVONSs:Sm were obtained after centrifugation and washing with deionized water and ethanol. For pure IVONSs synthesis, the procedure was identical except that the samarium nitrate reagent was omitted.
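As a quick sanity check (not part of the published protocol), the nominal Sm dopant fraction implied by these reagent concentrations can be computed. Whether a quoted "mol %" refers to Sm/In or Sm/(In + Sm) is left open here as an assumption:

```python
# Nominal Sm dopant fraction from the reagent amounts in the ME_In phase:
# 5 mL of a solution that is 50 mM In(NO3)3 and 2.5 mM Sm(NO3)3.
c_in, c_sm = 50.0, 2.5  # concentrations in mM (same solution volume)

sm_vs_in = 100 * c_sm / c_in              # Sm relative to In
sm_vs_total = 100 * c_sm / (c_in + c_sm)  # Sm relative to total metal cations

print(f"Sm/In = {sm_vs_in:.1f} mol %")            # 5.0 mol %
print(f"Sm/(In+Sm) = {sm_vs_total:.1f} mol %")    # 4.8 mol %
```

Either convention lands at roughly 5 mol % samarium, consistent with the dopant optimum reported later in the Results.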
Mass spectroscopy with nano-matrices and organic matrices
In the preliminary application test, we used a mixture of fatty acids as model LMW molecules to evaluate the SALDI-MS performance of the nano-matrices (i.e. IVONSs:Sm, graphene, CeO 2 ) and traditional organic matrices (i.e. CHCA, 9-AA). Here, four fatty acids were dissolved in methanol (1000 μM) and subsequently diluted to a specified concentration as required for specific experiments. The organic matrix CHCA (10 mg/mL) was dissolved in CH 3 CN/water (2:1, v/v) containing 0.1% trifluoroacetic acid. 9-AA was prepared at 10 mg/mL in CH 3 CN/water (1:1, v/v). The IVONSs:Sm and pure IVONSs suspensions were prepared in ethanol/H 2 O (3:1, v/v) and then sonicated for 10 min, forming a nano-matrix suspension with a concentration of 1.0 mg/mL. The suspension of nano-matrix graphene or CeO 2 was prepared according to previous literature. Specifically, graphene was dispersed in ethanol at 1.0 mg/mL with the aid of ultrasonication, and CeO 2 was dispersed in isopropanol and formed a homogeneous suspension at a concentration of 25 mg/mL. A total of 1 μL of matrix suspension was dropped onto the ground-steel sample target with 384 spots and then air-dried, followed by 1 μL of analyte solution at the specified concentration being deposited on top of the matrix. Additionally, the backgrounds of matrices of CHCA, 9-AA, and IVONSs:Sm were also collected under identical mass spectrometry conditions. For the IVONSs:Sm-assisted LDI-MS spectra of LA and OA mixtures in the presence of KSCN, different concentrations of KSCN were blended with the isovolumetric IVONSs:Sm suspensions beforehand. All the mass spectrometric data were acquired using the Bruker Ultraflextreme MALDI TOF/TOF MS equipped with a Smart-Beam II Nd:YAG laser (355 nm wavelength, 50 μm laser spot, 2000 Hz), either in the reflective positive- or negative-ion mode. Each MS spectrum was acquired as an accumulation of 1000 laser shots. 
Mass calibrations were performed externally using the MS peaks of CHCA for LMW molecule analysis. The laser irradiation power was controlled with an attenuation filter. The laser fluence was shown as a percentage of the laser output and set at 75% laser output for LMW compounds analysis, except where noted. The MALDI-MS instrument was controlled via the flexControl software (Bruker Daltonics, Inc., Germany). MS spectra and images were processed via the Bruker Daltonics flexAnalysis software.
Fingerprint samples preparation and analysis
All the fingerprint samples were volunteered by laboratory members. The fingerprints were laid onto the surfaces of a windshield (glass), an ID card (plastic), and a knife (metal) for 5 s after the volunteers rubbed their fingers on their foreheads once. Fresh fingerprints were immediately sprayed with IVONSs:Sm in bromoethane (5 mg/mL) using a handheld electronic sprayer (NVisions Co., Ltd., China) at a 10 cm spacing distance. The flow rate of spraying was 0.01 mL/s, and a spraying time of 30 s was used for the nano-matrix deposition. A nitrogen-blow procedure was applied after 180 s of drying at room temperature to detach loosely sorbed IVONSs:Sm. Subsequently, a double-sided copper foil tape was uniformly affixed to the nano-matrix-deposited fingerprint sample for 15 min. The double-sided copper foil tape was then stripped from the surface of the fingerprint sample, tailored to an appropriate size, and adhered to an indium tin oxide-coated glass slide for subsequent SALDI-MS analysis. In addition, the surface microtopography and elemental analysis of the extracted fingerprints were carried out using scanning electron microscopy (ZEISS Sigma 300) equipped with X-ray spectrometer attachments after a metal sputtering treatment. For fingerprint aging time determination, fingerprint samples were stored in a constant temperature and humidity test chamber (YP-150GSP, Taisite Co., Ltd., China) for various time points to simulate typical ambient conditions. The temperature, relative humidity, and lighting conditions were controlled and monitored. The fingerprint age of the simulated fingerprint samples was predicted based on the following equation: T = (t OA + t AA + t TA )/3, where T stands for the fingerprint age, and t OA , t AA , and t TA refer to the aging times of the three specific analytes, oleic acid (OA), ascorbic acid (AA), and threonic acid (TA), respectively.
The aging time of each specific analyte was evaluated semi-quantitatively from the experimental fit curves between the time-dependent MS intensity ratios (R OA , R AA , R TA ) and the storage periods. To detect exogenous LMW compounds on fingerprints, fingerprint samples were collected after volunteers washed their hands with lotion or rubbed their fingertips on their faces coated with acne cream. IVONSs:Sm-assisted LDI-MS imaging data were acquired using the Bruker Ultraflextreme MALDI TOF/TOF MS equipped with a Smart-Beam II Nd:YAG laser (355 nm wavelength, 50 μm laser spot, 2000 Hz) in negative-ion mode. The data acquisition was performed over the mass range of m/z 100–800 at a spatial resolution of 80 μm, collecting a total of 500 laser shots per pixel. All the chemical maps of LMW molecules were reconstructed and visualized using the Bruker Daltonics flexImaging software. The confirmation of LMW compounds was achieved by matching the molecular weight against the online database ( https://webbook.nist.gov/chemistry/ ) and previously published results [ 32 – 36 ].
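The age-averaging step T = (t OA + t AA + t TA )/3 can be sketched as follows. The linear calibration form R_i(t) = a_i + b_i·t and all numeric values are illustrative assumptions, since the text specifies only that experimental fit curves were used, not their functional form:

```python
# Hypothetical linear calibration curves R_i(t) = a_i + b_i * t (t in days),
# one per analyte; the (a_i, b_i) values below are invented for illustration.
calib = {
    "OA": (1.00, -0.020),  # oleic acid intensity ratio vs. storage time
    "AA": (0.80, -0.015),  # ascorbic acid
    "TA": (0.60, -0.010),  # threonic acid
}

def fingerprint_age(ratios):
    """T = (t_OA + t_AA + t_TA) / 3, with each t_i = (R_i - a_i) / b_i."""
    times = [(ratios[name] - a) / b for name, (a, b) in calib.items()]
    return sum(times) / len(times)

# Measured intensity ratios consistent (under these curves) with 10 days:
measured = {"OA": 0.80, "AA": 0.65, "TA": 0.50}
print(f"estimated fingerprint age: {fingerprint_age(measured):.1f} days")
```

Each analyte's fit curve is inverted to an individual aging time, and the three times are averaged, mirroring the equation given in the text.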
Characterization and LDI-MS performance of IVONSs:Sm
As depicted in Fig. 1 a, the preparation of IVONSs:Sm was carried out using the microemulsion-mediated solvothermal method. Transmission electron microscopy (TEM) analysis revealed that the as-synthesized IVONSs:Sm were translucent and corrugated, indicative of a sheet-like nanostructure conducive to the adsorption of LMW molecules (Fig. 1 b). A lattice spacing of 0.270 nm in the high-resolution TEM (HRTEM) image was indexed to the (112) plane of orthorhombic InVO 4 (Fig. 1 c). Additionally, the selected area electron diffraction (SAED) pattern confirmed the high degree of crystallinity of IVONSs:Sm (inset, Fig. 1 c). Elemental analysis using energy dispersive spectroscopy (EDS) and EDS mapping further verified the presence and uniform distribution of the In, V, O, and Sm elements in IVONSs:Sm, with semiquantitative results closely aligning with the theoretical compositions (Figure S1 a, b). Figure 1 d exhibited the powder X-ray diffraction (XRD) patterns of IVONSs:Sm; the sharp diffraction peaks also revealed the good crystallinity of IVONSs:Sm and matched well with standard orthorhombic-phase InVO 4 (JCPDS No. 48–0898). It is noteworthy that Sm doping had no effect on the characteristic diffraction pattern of InVO 4 , and there were no impurity peaks associated with In 2 O 3 , V 2 O 5 or other species. Beyond that, survey X-ray photoelectron spectroscopy (XPS) analysis confirmed the presence of In, V, O, and Sm without impurities (Fig. 1 e). Additionally, high-resolution XPS spectra of In 3d, V 2p, O 1s, and Sm 3d provided detailed insights into the chemical composition and bonding states of the IVONSs:Sm (Fig. 1 f ~ i). The high-resolution In 3d spectrum was characterized by an In 3d 5/2 peak at 444.3 eV and an In 3d 3/2 peak at 452.1 eV (Fig. 1 f). In the V 2p spectrum, the fitted peaks at 516.9 eV and 525.0 eV were associated with the V 2p 3/2 and V 2p 1/2 orbitals of V 5+ , respectively.
Meanwhile, peaks located at 515.2 eV and 523.3 eV were assigned to the V 2p orbitals of V 4+ , indicating that V 5+ might obtain electrons from nearby oxygen vacancies (Fig. 1 g). As expected, the O 1s spectrum revealed three deconvoluted peaks at 530.3, 532.1 and 533.6 eV, representing lattice oxygen (O L ), vacancy oxygen (O V ) and chemisorbed oxygen (O OH ), respectively (Fig. 1 h). Among the components of the Sm 3d peak, two peaks fitted at 1083.5 eV and 1110.9 eV corresponded to the Sm 3d 5/2 and Sm 3d 3/2 orbitals, respectively (Fig. 1 i). Collectively, these characterization results provided evidence of the successful synthesis of IVONSs:Sm.
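As a quick cross-check (not performed in the paper), the HRTEM d-spacing of the (112) plane can be related through Bragg's law to the 2θ position expected in the Cu-Kα XRD pattern, using the wavelength stated in the Apparatus section (λ = 0.1542 nm):

```python
import math

# Bragg's law: n * lam = 2 * d * sin(theta); first-order reflection, n = 1.
LAM = 0.1542   # nm, Cu-K-alpha wavelength used for XRD
d_112 = 0.270  # nm, (112) lattice spacing measured by HRTEM

theta = math.asin(LAM / (2 * d_112))   # Bragg angle in radians
two_theta = math.degrees(2 * theta)    # diffraction angle 2-theta in degrees
print(f"expected 2-theta for (112): {two_theta:.1f} deg")  # ~33.2 deg
```

The computed 2θ of about 33° falls inside the 10°–80° scan window used for the XRD measurement, consistent with the (112) reflection being observable in Fig. 1 d.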
In order to evaluate the performance of IVONSs:Sm as a nano-matrix for LDI-MS analysis, the four most common fatty acids in organisms, lauric acid (LA), myristic acid (MA), palmitoleic acid (PA), and oleic acid (OA), were selected as model analytes and analyzed with IVONSs:Sm as well as with the traditional organic matrices (CHCA and 9-AA) in both positive- and negative-ion modes. When CHCA was utilized in the positive-ion mode, the cationic adducts and fragments of CHCA were predominant, and the MS signals of the four analyte ions, including [M + H] + and [M + Na] + , were suppressed beneath these background signals (Figure S2 a, Table S2 ). Figure S2 b likewise revealed suppressed MS signals of the four fatty acids, indicating the inapplicability of 9-AA in positive-ion MALDI-MS analysis. As shown in Figure S2 c, multiple positive-ion signals of the fatty acids could be detected with fewer interference peaks when IVONSs:Sm was used as a nano-matrix. However, it is noteworthy that the quasi-molecular ion peaks of the fatty acids were accompanied by alkali and double-alkali adduct ions of the analytes (Table S2 ), rendering the mass spectrum in positive-ion mode particularly complicated to interpret. These results imply that the positive-ion mode may not be a good option for analyzing fatty acids. On the other hand, Fig. 2 a and b demonstrated the MS spectra of the four fatty acids in negative-ion mode using CHCA and 9-AA as matrices. For CHCA, the matrix-related ions ([M − CO 2 − H] − at m/z 144.0 and [M − H] − at m/z 188.1) dominated, and the analyte signals were suppressed. Improved MS signal intensity and a lower missing percentage of fatty acids were observed for the 9-AA matrix (Table S2 ); nevertheless, the high intensity of the intrinsic matrix-related ion ([M − H] − at m/z 193.1) still suppressed the MS signals of the fatty acids.
In contrast, an interference-free MS spectrum with the deprotonated [M − H] − ions of LA, MA, PA, and OA at m/z 199.2, 227.2, 253.2, and 281.2 was obtained in the range of m/z 150–600 with IVONSs:Sm as the nano-matrix (Fig. 2 c), implying that IVONSs:Sm was more effective in facilitating the negative ionization of fatty acids. As a control, the performance of the three matrices without analytes was also evaluated (Figure S3 ). The results indicated that backgrounds with several intrinsic matrix-related peaks were observed when CHCA and 9-AA were used as matrices in the different ionization modes, while IVONSs:Sm produced less background interference in the range of m/z 150–600, especially in negative-ion mode, further demonstrating the potential of IVONSs:Sm as a stable nano-matrix for LDI-MS analysis.
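The quoted [M − H]⁻ values follow directly from the monoisotopic elemental masses. A minimal sketch, assuming the standard molecular formulas for these fatty acids (LA C12H24O2, MA C14H28O2, PA C16H30O2, OA C18H34O2):

```python
# Monoisotopic masses (u) of the elements and the proton; [M - H]- is the
# neutral monoisotopic mass minus one proton (the electron stays behind).
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}
PROTON = 1.007276

def mz_deprotonated(c, h, o):
    """m/z of the [M - H]- ion for a molecule C_c H_h O_o."""
    mono = c * MASS["C"] + h * MASS["H"] + o * MASS["O"]
    return mono - PROTON

fatty_acids = {
    "lauric acid (LA)":      (12, 24, 2),
    "myristic acid (MA)":    (14, 28, 2),
    "palmitoleic acid (PA)": (16, 30, 2),
    "oleic acid (OA)":       (18, 34, 2),
}
for name, formula in fatty_acids.items():
    print(f"{name}: [M - H]- at m/z {mz_deprotonated(*formula):.1f}")
```

Rounded to one decimal place, the four computed values reproduce the peaks reported above (m/z 199.2, 227.2, 253.2, 281.2).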
Furthermore, the study evaluated the LDI-MS performance of IVONSs:Sm against previously reported nano-matrices such as graphene and a metal oxide (cerium oxide, CeO 2 ), which have been recognized for their superiority in SALDI-MS analysis of LMW compounds (Figure S4 a, b) [ 34 , 37 ]. Figure 2 c, d revealed that the MS signals of the four fatty acids were distinctly detectable as [M − H] − using all three nano-matrices. However, the higher analyte signals for IVONSs:Sm suggested its superiority over graphene and CeO 2 in detection sensitivity (Fig. 2 e), which could be ascribed to a synergistic effect of the stronger UV absorption and higher photothermal capability of IVONSs:Sm, together with potentially effective interactions between IVONSs:Sm and surface LMW compounds. As shown in Figure S4 c, the UV-vis absorption spectra of the three nano-matrices at the same dispersion concentration were investigated. The stronger absorption of IVONSs:Sm compared with the other nano-matrices at the operational wavelength (355 nm) allowed it to serve as an efficient receptor of laser energy in the LDI-MS process. Additionally, the infrared thermographic images and photothermal heating curves of the three nano-matrices indicated that IVONSs:Sm had more advantageous photothermal conversion than the two conventional nano-matrices, making it effective at transferring the absorbed laser energy and enhancing the ionization of analytes (Figure S5 ). With this in mind, IVONSs:Sm could be an optimal choice of nano-matrix for analyzing LMW compounds. Furthermore, Fig. 2 e and Figure S6 suggested that the incorporation of Sm 3+ into IVONSs contributed synergistically to the negative ionization process. Significantly, IVONSs without samarium doping yielded an MS spectrum with the same characteristic peaks of the four fatty acids but insufficient MS intensities.
This outcome demonstrated the superiority of IVONSs:Sm over pure IVONSs in the analysis of fatty acids. Accordingly, the optimal dopant amount was investigated, and Fig. 2 f indicated that the MS signal intensities of the four fatty acids reached their maximum at 5.0 mol % samarium, suggesting this as the optimal samarium molar ratio for IVONSs:Sm.
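The suitability of the 355 nm laser for exciting these nanosheets can be sanity-checked against the band gaps reported in the band-structure analysis below (2.19 eV for IVONSs:Sm, 2.28 eV for pure IVONSs), using the standard energy-wavelength conversion E (eV) ≈ 1239.84 / λ (nm):

```python
# Photon energy of the 355 nm Nd:YAG laser vs. the measured band gaps.
wavelength_nm = 355.0
e_photon = 1239.84 / wavelength_nm  # hc/lambda in eV for lambda in nm
print(f"photon energy at 355 nm: {e_photon:.2f} eV")  # 3.49 eV

for name, eg in [("IVONSs:Sm", 2.19), ("pure IVONSs", 2.28)]:
    ok = "exceeds" if e_photon > eg else "is below"
    print(f"{name}: Eg = {eg:.2f} eV -> laser photon {ok} the band gap")
```

Since the 3.49 eV photon energy comfortably exceeds both band gaps, band-to-band excitation and electron-hole pair generation by the MALDI laser are energetically possible in either material.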
Mechanistic basis of IVONSs:Sm-assisted LDI-MS
While the precise LDI mechanism remains to be conclusively established, numerous studies have provided evidence supporting a laser-induced photoexcitation and electronic transition mechanism in the ionization process [ 38 ]. Hence, the enhancement of MS signals with IVONSs:Sm could also be elucidated by the electronic transition mechanism for the negative ionization of LMW molecules. As depicted in Fig. 2 g (inset), the input laser energy could be transduced through the generation of electron-hole pairs in IVONSs:Sm, where the excited electrons could be emitted from IVONSs:Sm and subsequently interact with LMW molecules, facilitating the negative ionization of the analyte as [M − H] − . To validate this mechanistic basis, potassium thiocyanate was employed as a hole scavenger to investigate the correlation between the generation of electron-hole pairs and the LDI process of LMW molecules [ 38 ]. Figure 2 g demonstrated decreasing deprotonated ion peaks of LA (m/z 199.2) and OA (m/z 281.2) with increasing KSCN concentration (0, 5, 10, 20 μmol/mL), showing a distinct suppression of the MS signals of the two fatty acids at higher KSCN concentrations and thus experimentally supporting the electronic transition mechanism in Fig. 2 g [ 39 ]. Consequently, the electronic structure and optoelectronic properties of IVONSs:Sm were examined. Figure 2 h, i and Figure S7 presented the electron density analysis of IVONSs and IVONSs:Sm on the crystal face (100). In comparison with pure IVONSs (Figure S7 ), the distinct electron cloud overlap in IVONSs:Sm confirmed the coexistence of electrovalent and covalent bonding in the samarium-oxygen polar linkages (Fig. 2 h). Accordingly, the differential electron density diagram indicated that the electron density of the O 2- adjacent to Sm 3+ increased (Fig. 2 i).
Thus, the octahedra formed by Sm 3+ –O 2- bonds enhanced the electrical dipole moment in the lattice, which was beneficial for the diffusion of charge carriers. On the other hand, Figure S8 a depicted the diffuse reflectance ultraviolet-visible spectra of IVONSs:Sm and pure IVONSs. Notably, the strong absorption bands ranging from 200 to 400 nm were mainly attributed to the charge transfer effect between V 5+ and O 2- and the f-f transitions of Sm 3+ (Figure S8 b) [ 40 , 41 ], implying that IVONSs:Sm could efficiently absorb the laser power (e.g., the 355 nm Nd:YAG laser) in the MALDI-MS instrument, thus allowing for the subsequent ionization process. According to the Tauc equation, the band gaps of IVONSs:Sm and pure IVONSs derived from the optical absorption were calculated to be 2.19 and 2.28 eV, respectively (Fig. 3 a). The lower band gap of IVONSs:Sm relative to pure IVONSs was further supported by density functional theory (DFT) simulation (Fig. 3 b, c), where the DFT-derived band gap of pure IVONSs aligned with the experimental data. For IVONSs:Sm, the calculated direct band gap was slightly smaller than the experimental value, as DFT simulation tends to underestimate the band gap [ 42 ]. The density of states (DOS) results of IVONSs indicated that the conduction band was predominantly composed of V 3d states (Figure S9 a), while in IVONSs:Sm the V 3d states hybridized with the Sm 4f states to form the conduction band (Fig. 3 d). This suggested that the Sm 4f states shifted the conduction band towards lower energy, thus reducing the band gap of IVONSs:Sm. The experimental and calculated results consistently demonstrated the decrease in the IVONSs band gap upon Sm doping. Importantly, the lower band gap of IVONSs:Sm promoted electronic transitions and charge carrier generation, which was conducive to the negative ionization of analytes as per the mechanism illustrated above.
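The Tauc extrapolation used to obtain the 2.19 and 2.28 eV band gaps can be sketched numerically. In the sketch below the absorption data are synthetic placeholders mimicking a 2.19 eV direct gap (the function, values, and threshold are illustrative assumptions, not the measured spectra):

```python
# Band-gap estimation via a Tauc plot, as used to obtain 2.19 eV (IVONSs:Sm)
# vs 2.28 eV (pure IVONSs). For a direct-gap semiconductor the Tauc relation
# is (alpha*h*nu)^2 = k*(h*nu - Eg); extrapolating the linear region of
# (alpha*h*nu)^2 vs photon energy to zero gives Eg.

def tauc_band_gap(energies_eV, tauc_values, threshold=0.0):
    """Least-squares line through the linear (above-threshold) region;
    the x-intercept of that line is the band-gap estimate."""
    pts = [(e, t) for e, t in zip(energies_eV, tauc_values) if t > threshold]
    n = len(pts)
    sx = sum(e for e, _ in pts)
    sy = sum(t for _, t in pts)
    sxx = sum(e * e for e, _ in pts)
    sxy = sum(e * t for e, t in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -intercept / slope  # (alpha*h*nu)^2 = 0  =>  h*nu = Eg

# Synthetic Tauc data for a hypothetical 2.19 eV direct gap.
Eg_true = 2.19
energies = [2.0 + 0.05 * i for i in range(21)]          # 2.0-3.0 eV
tauc = [max(0.0, 4.7 * (e - Eg_true)) for e in energies]

print(round(tauc_band_gap(energies, tauc), 2))
```

In practice the linear region is selected manually from the measured (αhν)² curve; the threshold argument here is a stand-in for that choice.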
In addition, Fig. 3 e ~ h depicted atomic force microscopy (AFM) and surface photovoltage (SPV) images of the two vanadate nanosheets, providing a spatial visualization of the generation and transport of charge carriers at the nanoscale. The AFM images of IVONSs:Sm and pure IVONSs, with similar lateral dimensions and thicknesses of approximately 5 nm, were presented in Fig. 3 e and g. However, the corresponding SPV images, obtained by subtracting the potentials under dark conditions, revealed that IVONSs:Sm exhibited a more negative light-induced potential change of approximately 30 mV compared to pure IVONSs. This indicated a higher accumulation of electrons and greater mobility of charge carriers from the bulk phase to the surface of IVONSs:Sm, which facilitated energy transfer from the nanomaterial to LMW molecules, consequently increasing the efficiency of negative ionization in LDI-MS. To gain insight into the role of IVONSs:Sm in the LDI process, we further investigated the thermodynamic profile of adsorption and dissociation of analytes on the two vanadate nanosheets using DFT simulations. As a representative LMW molecule, OA was first analyzed by calculating its electrostatic potential (ESP) based on the ground-state electron density (Figure S9 b). Owing to its maximum ESP, the carboxyl group was identified as the favored site for adsorption of OA (RCOOH) onto the vanadate nanosheets (RCOOH*). Consequently, the acidic hydrogen in the carboxyl terminal tended to transfer from OA to the surface lattice oxygen, liberating the negative ion of OA (RCOO − ). Figure 3 i demonstrated that the Gibbs free energy of each step was lower on IVONSs:Sm than on pure IVONSs, indicating that the LDI process on the surface of IVONSs:Sm was thermodynamically more favorable.
Taking all of the above into consideration, the improved performance of IVONSs:Sm in nanomaterial-assisted LDI could be elucidated as a synergistic effect of various factors, including optical absorption and charge carrier mobility.
IVONSs:Sm-assisted LDI-MS analysis of fatty acids
Due to its superior performance, IVONSs:Sm was deemed a highly effective nano-matrix for in situ detection of LMW compounds in authentic samples, such as fingerprints. A lifting process using double-sided conductive copper foil tape was adopted to retrieve fingerprints from surfaces, and the exceptional ability of IVONSs:Sm to negatively ionize LMW compounds on the copper conductive tape was verified (Figure S10 ). To assess the reproducibility of IVONSs:Sm-assisted LDI-MS in negative-ion mode, the MS signal intensities of representative LA (saturated fatty acid) and OA (unsaturated fatty acid) were collected from fifteen randomly selected positions within a single spot (diameter ~ 2.1 mm) of analytes deposited on the copper conductive tape. As shown in Fig. 3 j, relatively stable MS signals for LA and OA were observed, and a low relative standard deviation (RSD) for each acquisition confirmed the high shot-to-shot reproducibility of the IVONSs:Sm-assisted LDI-MS approach (Fig. 3 k). In comparison, the MS signal intensities for LA and OA fluctuated significantly, with an RSD of ~ 20% for each acquisition (Fig. 3 j), indicating unsatisfactory repeatability when employing the traditional organic matrix 9-AA. The good shot-to-shot reproducibility was mainly attributed to the difference in the homogeneity of IVONSs:Sm and 9-AA deposited on the copper conductive tape. Figure S11 demonstrated the crystallization of IVONSs:Sm and two traditional organic matrices (CHCA and 9-AA) in a MALDI-MS spectrometer, in which a stark difference in matrix distribution between IVONSs:Sm and the two organic matrices was evident. IVONSs:Sm spread more uniformly after solvent evaporation, instead of exhibiting the “sweet spot” effect of CHCA and 9-AA on the target surface.
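The reproducibility metric used here, the relative standard deviation of repeated acquisitions, can be computed as follows. The two intensity series are invented for illustration only, one mimicking a homogeneous nano-matrix and one mimicking a "sweet spot"-prone organic matrix:

```python
# Shot-to-shot reproducibility check: relative standard deviation (RSD) of
# MS signal intensities across fifteen sampling positions, as reported for
# LA and OA in Fig. 3 j, k. The intensity values are illustrative only.
from statistics import mean, stdev

def rsd_percent(intensities):
    """RSD (%) = sample standard deviation / mean * 100."""
    return stdev(intensities) / mean(intensities) * 100.0

# Hypothetical intensities: a stable series (homogeneous nano-matrix-like)
# vs a fluctuating one ("sweet spot"-prone organic-matrix-like).
stable      = [980, 1010, 995, 1005, 990, 1000, 1015, 985, 1002, 998,
               1008, 992, 1001, 996, 1004]
fluctuating = [400, 1500, 800, 1200, 300, 1600, 700, 1100, 500, 1400,
               900, 1300, 600, 1000, 200]

print(f"stable RSD:      {rsd_percent(stable):.1f}%")
print(f"fluctuating RSD: {rsd_percent(fluctuating):.1f}%")
```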
The good shot-to-shot reproducibility effectively resolved the variability of signal intensity, providing assurance of reliable quantitative analysis of analytes using IVONSs:Sm. Fig. 3 l demonstrated that the MS peaks of the two deprotonated fatty acids [M − H] − rose gradually with increasing analyte concentrations. The MS responses for LA at m/z 199.2 and OA at m/z 281.2 were proportional to the concentrations of the target analytes (Figure S13 a), and the limit of detection (LOD) investigations for LA and OA corroborated the sensitivity of the IVONSs:Sm-assisted LDI-MS approach (8.2 μM for LA and 11.6 μM for OA). As shown in Figure S12 , a quantitative analysis of residual LA and OA in real fingerprint samples was further performed using high-performance liquid chromatography-electrospray ionization mass spectrometry (HPLC-ESI-MS). The residual quantities of LA and OA in a fingerprint sample were determined to be 0.36 and 2.72 μg, respectively. Taking the LODs and the added volume (1 μL) of fatty acid standard solutions into account, the proposed IVONSs:Sm-assisted LDI-MS approach was capable of detecting 1.6 × 10 − 3 μg of LA and 3.3 × 10 − 3 μg of OA in real samples, amounts substantially lower than those detectable by HPLC-ESI-MS. Therefore, the quantitative results supported that IVONSs:Sm-assisted LDI-MS could satisfy the analytical demands of fatty acid determination in fingerprints. Given that a high salt concentration might suppress the ionization of the analyte, the salt tolerance was evaluated by adding different concentrations of NaCl (0 ~ 500 mM) in IVONSs:Sm-assisted LDI-MS detection of the two representative fatty acids. Figure 3 m showed that the addition of 10–500 mM NaCl only slightly decreased the signal intensities of LA and OA, confirming the good salt tolerance of the nano-matrix IVONSs:Sm.
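The conversion from the molar LODs to the absolute masses quoted above (1.6 × 10 − 3 μg of LA and 3.3 × 10 − 3 μg of OA) can be verified with a short calculation; the molecular weights are standard literature values, not taken from this work:

```python
# Converting the molar LODs (8.2 uM for LA, 11.6 uM for OA) and the 1 uL
# spotting volume into absolute detectable masses.

def lod_mass_ug(lod_uM, volume_uL, mw_g_per_mol):
    """Absolute mass (ug) = concentration (mol/L) * volume (L) * MW (g/mol),
    converted to micrograms."""
    moles = lod_uM * 1e-6 * volume_uL * 1e-6   # mol
    return moles * mw_g_per_mol * 1e6          # g -> ug

MW_LA = 200.32  # lauric acid, g/mol (literature value)
MW_OA = 282.46  # oleic acid, g/mol (literature value)

print(f"LA: {lod_mass_ug(8.2, 1.0, MW_LA):.1e} ug")   # ~1.6e-03 ug
print(f"OA: {lod_mass_ug(11.6, 1.0, MW_OA):.1e} ug")  # ~3.3e-03 ug
```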
Additionally, the characteristic MS peaks of the LA and OA mixture obtained using IVONSs:Sm after two months of storage matched those obtained using freshly prepared IVONSs:Sm (Fig. 3 n), and there was no significant divergence in the pattern or intensity of the MS peaks after multiple laser shots (Figure S13 b, c). Hence, the stability of IVONSs:Sm, as well as the photostability of analytes during IVONSs:Sm-assisted LDI-MS analysis, could be experimentally ascertained.
IVONSs:Sm-assisted LDI-MS imaging of fingerprints
Based on the aforementioned study, the applicability of IVONSs:Sm for detecting LMW compounds adhered to fingerprints was preliminarily investigated. IVONSs:Sm were dispersed in bromoethane, a solvent with a low boiling point, and then sprayed onto the fingerprint area through an atomizer. Following the deposition of IVONSs:Sm, the fingerprint was lifted using copper conductive tape. Scanning electron microscopy (SEM) images and EDS analysis confirmed the concentration of IVONSs:Sm on the ridges of the fingerprint after evaporation of the dispersion medium (Fig. 4 a, b). To collect IVONSs:Sm-assisted LDI-MS spectra of the fingerprints, the copper conductive tape bearing the extracted fingerprint was affixed to an indium tin oxide-coated glass slide for IVONSs:Sm-assisted LDI-MS analysis. Figure S14 a presented the positive-ion MS profile obtained from the extracted fingerprint, which showed the MS spectrum of an endogenous mixture containing PA, hexadecanoic acid (HA), OA, and stearic acid (SA). The abundance of MS peaks confirmed that the chemical information of the fingerprint had been transferred onto the substrate of the copper conductive tape. Apart from the fatty acids, ion peaks originating from other endogenous compounds were also recorded in the range m/z 100–700. For example, the peaks detected at m/z 147.1 and 177.1 were assigned to the [M + H] + ions of lysine (Lys) and ascorbic acid (AA), respectively. In contrast, the MS spectrum of the extracted fingerprint obtained in negative-ion mode exhibited a relatively concise signal with long-term stability under high vacuum (Fig. 4 c and Figure S14 b ~ d). Unlike the multiple peaks per analyte in positive-ion mode, all the fatty acids were clearly detected solely as deprotonated [M − H] − ions with the assistance of IVONSs:Sm. Additionally, intense ions in the fingerprint were putatively identified via MALDI LIFT-TOF/TOF MS (Figure S15 , S16 , and Table S3 ).
Furthermore, considering the lipids coexisting in fingerprints, as well as potential ester bond cleavage in the presence of IVONSs:Sm, a control experiment was performed and indicated a relatively stable MS signal of two model fatty acids in the presence of two typical lipids (glycerol trimyristate and glycerol tripalmitate), which excluded the interference of lipids with subsequent fatty acid analysis (Figure S17 ). All the results suggested the advantages and feasibility of negative-ion IVONSs:Sm-assisted LDI-MS for spatial molecular profiling in fingerprints.
Since IVONSs:Sm allowed for the MS analysis of chemical species contained within the fingerprint, IVONSs:Sm-assisted LDI-MS imaging of the fingerprint could be generated from the spatial distribution of the MS signals of selected deprotonated analytes. Considering that the spraying conditions of IVONSs:Sm could affect the loading amount and homogeneous coverage of the nano-matrix on the fingerprint, the critical experimental parameters (the dispersing concentration of IVONSs:Sm and the spraying time) were optimized. For the sake of simplicity, HA was taken as the representative fatty acid for IVONSs:Sm-assisted LDI-MS analysis in negative-ion mode. As shown in Figure S18 a, the vanadium content on the substrate of the copper conductive tape was determined by inductively coupled plasma mass spectrometry (ICP-MS), which implied that the loading amount of IVONSs:Sm on the fingerprint gradually increased with the IVONSs:Sm concentration (Figure S18 b). As a consequence, the MS signal of HA at m/z 255.2 rose with the dispersing concentration of IVONSs:Sm. Likewise, the spraying time was optimized, and the results were presented in Figure S18 c. The incremental vanadium content and MS signal with increasing spraying time of the IVONSs:Sm suspension further confirmed that adequate IVONSs:Sm deposition was beneficial to the enhancement of the MS signal. Beyond the MS signal intensity of the analyte, the quality of IVONSs:Sm-assisted LDI-MS imaging also depended on these spraying parameters: MS images with higher clarity and contrast were obtained with increasing concentration of IVONSs:Sm (Figure S19 a ~ c).
Figure S19 d ~ f indicated that the MS images of the fingerprint gradually blurred when the spraying time increased to 60 s, suggesting that a prolonged deposition procedure of IVONSs:Sm resulted in an undesirable MS image lacking the original minutiae of the fingerprint pattern. This phenomenon might be ascribed to the fact that extended and continuous exposure to sprayed bromoethane droplets could thoroughly wet the fingerprint area, causing the delocalization and diffusion of hydrophobic fatty acids, so that the corresponding MS image of the fingerprint was obtained with poor spatial resolution. In this case, a proper spraying time enabled a well-defined MS image in fingerprint analysis. Based on the above results, the optimal dispersing concentration (5 mg/mL) and spraying time (30 s) of the IVONSs:Sm suspension for matrix deposition were settled.
In order to investigate the feasibility of the proposed IVONSs:Sm-assisted LDI-MS method for fingerprint analysis, fingerprints left on different material surfaces were analyzed via IVONSs:Sm-assisted LDI-MS imaging in negative-ion mode. As shown in Fig. 4 d, the fingerprint morphological features could be clearly perceived, and the MS images could also provide the spatial distribution of characteristic LMW compounds detected at m/z 145.1 (Lys), 175.0 (AA), 255.2 (HA), and 281.2 (OA). Importantly, there was no definite difference in imaging quality among the three material surfaces. These results proved the capability of IVONSs:Sm-assisted LDI-MS imaging to allow molecular-level fingerprint recognition on different material surfaces. Figure S20 demonstrated that the imaging performance of IVONSs:Sm compared favorably with those of graphene and CeO 2 . Compared to graphene and CeO 2 (Table S4 ), the highest signal intensity and the largest number of detectable endogenous LMW compounds in fingerprints were obtained using IVONSs:Sm (Figure S20 a). As seen in Figure S20 b, fingerprint morphological features and the spatial distribution of two representative LMW compounds (OA and Lys) could be clearly visualized, while the relatively lower MS intensities of LMW compounds obtained with the nano-matrix graphene limited its performance in MS imaging of the fingerprint (Figure S20 c). Moreover, a lower contrast between the ridges and valleys of the fingerprint was observed in the MS images using the nano-matrix CeO 2 (Figure S20 d). This phenomenon might be mainly ascribed to the low hydrophobicity of CeO 2 (Figure S21 ), which led to an uneven dispersion of CeO 2 in the weakly polar suspending medium and hence inhomogeneous coverage on the fingerprint. In addition, the more hydrophilic CeO 2 could also reduce the performance of LDI-MS imaging via attenuated interaction between the nano-matrix and hydrophobic LMW compounds (e.g., fatty acids).
All of these results suggested that IVONSs:Sm, with appropriate hydrophobicity and high LDI capability, could surpass the nano-matrices graphene and CeO 2 in the imaging quality of LMW compounds in fingerprints. Moreover, given that the analysis of exogenous compounds on the fingerprint, such as residues of personal skincare products or drugs, is of great value for reconstructing a lifestyle profile of the fingerprint donor, exogenous fingerprints were prepared after the use of a liquid soap or an acne cream and then analyzed via IVONSs:Sm-assisted LDI-MS imaging. Figure 4 e exhibited a negative-ion mode MS spectrum and the derived MS images of two representative compounds, namely HA (endogenous) and sodium dodecyl benzene sulfonate (SDBS, exogenous). SDBS is a widely used surfactant in personal care products, and the related [M − Na] − ion of SDBS was detected at m/z 325.2. The existence and spatial distribution of this exogenous ingredient could be verified by the MS image, which overlapped well with the fingerprint pattern derived from the negative ion of HA at m/z 255.2. Similarly, fingerprint chemical images of HA and the key pharmaceutical component of the acne cream (retinoic acid, RA) were extracted from the MS spectrum to display the fingerprint pattern. Figure 4 f confirmed that the fingerprint ridge details could be observed using the intensity map of the [M − H] − ion of HA (m/z 255.2) or RA (m/z 299.2). To further evaluate the detectability of exogenous LMW compounds on the fingerprint, fingerprint samples with different SDBS content levels, with SDBS selected as the model exogenous analyte, were collected after washing hands with liquid soap diluted over a gradient of ratios (Figure S22 a). It was found that SDBS could be detected with a signal-to-noise ratio (S/N) of 5.1 even at a dilution ratio of 1/10. Besides, the LODs of SDBS and RA were calculated to be 3.6 ng and 12.9 ng, respectively.
Consequently, the IVONSs:Sm-assisted LDI-MS method demonstrated satisfactory sensitivity and great promise for monitoring exogenous compounds on fingerprints, especially in the field of forensic science. The capability to sensitively resolve both fingerprint morphology and chemical information would make fingerprints recovered at a crime scene far more valuable, particularly those not included in existing fingerprint databases.
In addition, the age of a fingerprint is another important piece of information for narrowing down persons of interest. Encouraged by the good performance shown above, an attempt was made to assess the age of the fingerprint using the IVONSs:Sm-assisted LDI-MS tool. For the sake of accuracy, all the fingerprint samples were stored at 30 °C and 60% relative humidity (RH). Figure 4 g presented MS spectra of representative fresh and aged fingerprints in negative-ion mode. The unsaturated OA was found to undergo peroxidation, resulting in a signal decrease at m/z 281.2, while the MS signals of saturated HA in fingerprints were relatively stable over time. At the same time, two additional peaks related to the degradation of OA appeared with prolonged fingerprint age, which were identified as oleic acid hydroperoxide (OAHP, m/z 314.2) and 9-oxo-nonanoic acid (ONA, m/z 172.1). The peroxidation mechanism of OA, based on former studies, is given in Fig. 4 h [ 43 ]. Namely, OA undergoes a radical-mediated autoxidation in ambient air, forming isomerization products of OAHP; secondary oxidation products such as ONA and nonanal are then derived from the cleavage of OAHP. Likewise, the decrease of the oxidizable AA and the new peak that emerged at m/z 136.0, assigned to threonic acid (TA), confirmed the oxidative degradation pathway of AA. The widely accepted mechanism of AA oxidation involves the generation of a series of active intermediates, e.g., dehydroascorbic acid (DHA) and diketogulonic acid (DKG), en route to TA [ 42 , 44 ]. That is, DHA, an oxidized form of AA, is hydrolyzed to DKG, and the intermediate DKG is further decomposed to TA after a rearrangement reaction (Fig. 4 i). The corresponding MS images provided compelling support for these trends of OA and AA over time. The MS image of the HA spatial pattern remained clearly visible and practically invariable on the fingerprint ridges over time (Fig. 5 a).
In contrast, the fingerprint patterns of OA and AA gradually faded from day 0 to day 60. The MS image of TA was absent during the initial period of fingerprint aging but became much clearer over time. These results gave a visual demonstration of the time-dependent changes of the representative LMW compounds in fingerprints. Importantly, quantitative analysis of the changes in the LMW compounds implied the possibility of tracking the age of fingerprints. To reduce systematic error, the MS signals were normalized by calculating the intensity ratios (R OA , R AA , and R TA ) using the following equations: R OA = I OA /I HA , R AA = I AA /I HA , R TA = I TA /I HA , where I OA , I AA , and I TA represented the MS signal intensities of OA at m/z 281.2, AA at m/z 175.0, and TA at m/z 136.0, respectively, and I HA stood for the HA intensity at m/z 255.2, which showed no significant change. Hence, Fig. 5 b ~ d illustrated the curves of R OA , R AA , and R TA over a two-month period of fingerprint aging, respectively. The decreases of R OA and R AA and the increase of R TA were consistent with the differences in the MS images in Fig. 5 a. Similarly, Figure S22 b ~ d provided a trend of the calculated MS intensity ratios (R OA , R AA , and R TA ) over time consistent with Fig. 5 b ~ d. The insignificant difference among all the groups suggested a minor influence of ambient fluctuations on the curves of R OA , R AA , and R TA , confirming the validity of this means of establishing fingerprint age. Moreover, in a simulation experiment for fingerprint age determination, ten simulated fingerprint specimens with different aging times were analyzed via the IVONSs:Sm-assisted LDI-MS approach. Semi-quantitative evaluation of fingerprint age was performed based on the time-dependent MS intensity ratios (R OA , R AA , and R TA ). Inferred fingerprint aging times were derived by adopting the average of the aging times estimated from the three analytes.
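The semi-quantitative age inference described above (normalize each signal to HA, invert each time-dependent ratio curve, then average the three estimates) can be sketched as follows. The calibration values and measured ratios are hypothetical placeholders, not the data of Fig. 5:

```python
# Sketch of fingerprint-age estimation from normalized intensity ratios
# R_OA = I_OA/I_HA, R_AA = I_AA/I_HA, R_TA = I_TA/I_HA. Calibration curves
# and measured ratios below are illustrative assumptions.

def interpolate_age(ratio, days, ratios):
    """Invert a monotonic calibration curve (ratio vs age in days) by
    piecewise-linear interpolation; clamps outside the calibrated range."""
    pairs = sorted(zip(ratios, days))  # make ratio the ascending key
    rs = [r for r, _ in pairs]
    ds = [d for _, d in pairs]
    if ratio <= rs[0]:
        return ds[0]
    if ratio >= rs[-1]:
        return ds[-1]
    for i in range(1, len(rs)):
        if ratio <= rs[i]:
            f = (ratio - rs[i - 1]) / (rs[i] - rs[i - 1])
            return ds[i - 1] + f * (ds[i] - ds[i - 1])

days = [0, 15, 30, 45, 60]
# Hypothetical calibration: R_OA and R_AA decay, R_TA grows with age.
cal = {
    "R_OA": [1.00, 0.70, 0.48, 0.33, 0.22],
    "R_AA": [0.60, 0.42, 0.30, 0.21, 0.15],
    "R_TA": [0.00, 0.10, 0.19, 0.27, 0.34],
}
# Measured ratios for an unknown fingerprint (hypothetical).
measured = {"R_OA": 0.48, "R_AA": 0.30, "R_TA": 0.19}

estimates = [interpolate_age(measured[k], days, cal[k]) for k in cal]
age = sum(estimates) / len(estimates)
print(f"estimated age: {age:.0f} days")
```

Averaging the three independent estimates mirrors the paper's use of the mean aging time over the three analytes to damp analyte-specific noise.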
Comparisons between the true and predicted results were visualized by a heat map, and the color of the squares denoted the gradation of fingerprint age (Figure S23 a). Verification with low prediction error proved the practicality of the IVONSs:Sm-assisted LDI-MS tool for determining the aging time of unknown fingerprint samples.
We subsequently sought to extend the application of the IVONSs:Sm-assisted LDI-MS method to monitoring biomarkers excreted from finger sweat glands. Elevated levels of conjugated bilirubin (bilirubin glucuronide, BilG) in biofluids are related to various hepatopathies [ 45 ]. Hence, rapidly monitoring the level of BilG in a label-free manner is of great importance for early diagnosis (Figure S23 b). With its reliable, highly sensitive, and label-free features, IVONSs:Sm-assisted LDI-MS is particularly well suited for trace BilG detection. The typical MS spectrum of a sweat fingerprint sample from an acute hepatitis patient displayed the [M − H] − peak of BilG at m/z 759.3, which was putatively identified via MALDI LIFT-TOF/TOF MS (Fig. 5 e, f). Additionally, the MS images exhibited a sweat pore-centered spatial distribution of BilG in the fingerprint ridge area (inset, Fig. 5 e). For a sweat fingerprint sample from a healthy donor, an MS spectrum without a BilG signal was observed (Figure S23 c). To further evaluate the feasibility of the proposed IVONSs:Sm-assisted LDI-MS tool in medical analysis, BilG levels in fingerprints from 10 healthy donors and 6 hepatitis patients were analyzed. Figure S24 enumerated the negative-ion mode MS spectra of fingerprints from the hepatitis patient group, and the higher MS intensity at m/z 759.3 than in the healthy group indicated an increased BilG concentration in the sweat of hepatitis patients (Fig. 5 g and Figure S25 ). By using the MS signal of HA (m/z 255.2) as an internal standard, this discrepancy could be more clearly discriminated via a ratiometric parameter (MI 759.3 /MI 255.2 ), where MI 759.3 and MI 255.2 represented the MS signal intensities at m/z 759.3 and 255.2, respectively (Fig. 5 h). Furthermore, the serum total bilirubin (TBil) levels of the 16 participants, widely used in serodiagnosis, were determined by the clinical bilirubin oxidase method.
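The ratiometric parameter and its correlation with serum TBil follow a standard Pearson computation, which can be sketched with fabricated placeholder values (not patient data):

```python
# Pearson correlation between the fingerprint ratio MI_759.3/MI_255.2 and
# serum total bilirubin (TBil). The 16 paired values are fabricated
# placeholders for 10 hypothetical healthy donors and 6 patients.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratiometric parameters (MI ratio) and TBil values (umol/L):
mi_ratio = [0.02, 0.03, 0.01, 0.04, 0.02, 0.05, 0.03, 0.02, 0.04, 0.03,
            0.45, 0.60, 0.38, 0.72, 0.55, 0.66]
tbil     = [8, 10, 6, 12, 9, 14, 11, 7, 13, 10,
            85, 120, 70, 150, 100, 130]

print(f"r = {pearson_r(mi_ratio, tbil):.3f}")
```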
Figure 5 i showed a good consistency between the parameter MI 759.3 /MI 255.2 and the TBil concentration in identifying hepatitis patients, which was further corroborated by Pearson correlation analysis (r = 0.872) in Fig. 5 j. Owing to its label-free operation, non-invasive sampling, and quick response, the proposed method possesses outstanding advantages over many bilirubin assay techniques [ 46 , 47 ].

Results and discussion
Characterization and LDI-MS performance of IVONSs:Sm
As depicted in Fig. 1 a, the preparation of IVONSs:Sm was carried out using a microemulsion-mediated solvothermal method. Transmission electron microscopy (TEM) analysis revealed that the as-synthesized IVONSs:Sm appeared translucent and corrugated, indicative of a sheet-like nanostructure conducive to the adsorption of LMW molecules (Fig. 1 b). A lattice spacing of 0.270 nm in the high-resolution TEM (HRTEM) image was indexed to the (112) plane of orthorhombic InVO 4 (Fig. 1 c). Additionally, the selected area electron diffraction (SAED) pattern confirmed the high degree of crystallinity of IVONSs:Sm (inset, Fig. 1 c). Elemental analysis using energy-dispersive spectroscopy (EDS) and EDS mapping further verified the presence and uniform distribution of In, V, O, and Sm in IVONSs:Sm, with semiquantitative results closely aligning with the theoretical compositions (Figure S1 a, b). Figure 1 d exhibited the powder X-ray diffraction (XRD) pattern of IVONSs:Sm; the sharp diffraction peaks also revealed the good crystallinity of IVONSs:Sm and matched well with standard orthorhombic-phase InVO 4 (JCPDS No. 48–0898). It is noteworthy that Sm doping had no effect on the characteristic diffraction pattern of InVO 4 , and there were no impurity peaks associated with In 2 O 3 , V 2 O 5 , or other species. Beyond that, survey X-ray photoelectron spectroscopy (XPS) analysis confirmed the presence of In, V, O, and Sm without impurities (Fig. 1 e). Additionally, high-resolution XPS spectra of In 3d, V 2p, O 1s, and Sm 3d provided detailed insights into the chemical composition and bonding states of IVONSs:Sm (Fig. 1 f ~ i). The high-resolution In 3d spectrum was characterized by an In 3d 5/2 peak at 444.3 eV and an In 3d 3/2 peak at 452.1 eV (Fig. 1 f). For the V 2p spectrum, the fitted peaks at 516.9 eV and 525.0 eV were associated with the V 2p 3/2 and V 2p 1/2 orbitals of V 5+ , respectively.
Meanwhile, peaks located at 515.2 eV and 523.3 eV were assigned to the V 2p orbitals of V 4+ , indicating that V 5+ might obtain electrons from nearby oxygen vacancies (Fig. 1 g). As expected, the O 1s spectrum revealed three deconvoluted peaks at 530.3, 532.1 and 533.6 eV, representing lattice oxygen (O L ), vacancy oxygen (O V ) and chemisorbed oxygen (O OH ), respectively (Fig. 1 h). In the Sm 3d region, two peaks fitted at 1083.5 eV and 1110.9 eV corresponded to the Sm 3d 5/2 and Sm 3d 3/2 orbitals, respectively (Fig. 1 i). Collectively, these characterization results provided evidence of the successful synthesis of IVONSs:Sm.
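The HRTEM lattice spacing can be cross-checked against the orthorhombic cell geometry. The lattice parameters below are approximate literature values for orthorhombic InVO4, an assumption for illustration rather than values reported in this work:

```python
# Cross-checking the HRTEM lattice spacing (0.270 nm, indexed to the (112)
# plane) against the orthorhombic InVO4 cell. For an orthorhombic lattice:
# 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2.
from math import sqrt

def d_spacing_orthorhombic(h, k, l, a, b, c):
    """Interplanar spacing (same units as a, b, c) for plane (hkl)."""
    return 1.0 / sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

# Approximate literature cell parameters for orthorhombic InVO4, in nm.
a, b, c = 0.5765, 0.8542, 0.6592

d112 = d_spacing_orthorhombic(1, 1, 2, a, b, c)
print(f"d(112) ~ {d112:.3f} nm")  # close to the measured 0.270 nm
```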
In order to evaluate the performance of IVONSs:Sm as a nano-matrix for LDI-MS analysis, the four most common fatty acids in organisms, namely lauric acid (LA), myristic acid (MA), palmitoleic acid (PA), and oleic acid (OA), were selected as model analytes and analyzed with IVONSs:Sm as well as with traditional organic matrices (CHCA and 9-AA) in both positive- and negative-ion modes. When CHCA was utilized in positive-ion mode, the cationic adducts and fragments of CHCA were found to be predominant, burying the MS signals of the four analytes, including their [M + H] + and [M + Na] + ions, beneath these background signals (Figure S2 a, Table S2 ). Additionally, Figure S2 b also revealed suppressed MS signals of the four fatty acids, indicating the inapplicability of 9-AA in positive-ion MALDI-MS analysis. As shown in Figure S2 c, multiple positive-ion signals of fatty acids could be detected with fewer interference peaks when IVONSs:Sm was used as a nano-matrix. However, it is noteworthy that the quasi-molecular ion peaks of the fatty acids were accompanied by single and double alkali adduct ions of the analytes (Table S2 ), rendering the mass spectrum in positive-ion mode particularly complicated to interpret. These results imply that positive-ion mode may not be a good option for analyzing fatty acids. On the other hand, Fig. 2 a and b demonstrated the MS spectra of the four fatty acids in negative-ion mode using CHCA and 9-AA as matrices. For CHCA, the matrix-related ions ([M − CO 2 − H] − at m/z 144.0 and [M − H] − at m/z 188.1) dominated and the analyte signals were suppressed. By comparison, improved MS signal intensity and a lower missing percentage of fatty acids were observed for the 9-AA matrix (Table S2 ). Nevertheless, the intense intrinsic matrix-related ion ([M − H] − at m/z 193.1) still suppressed the MS signals of the fatty acids.
In contrast, an interference-free MS spectrum with deprotonated [M − H] − ions of LA, MA, PA, and OA at m/z 199.2, 227.2, 253.2, and 281.2 was obtained in the range of m/z 150–600 with IVONSs:Sm as the nano-matrix (Fig. 2 c), implying that IVONSs:Sm were more suitable for facilitating negative ionization of fatty acids. As a control, the performance of the three matrices without analytes was also evaluated (Figure S3 ). The results indicated that backgrounds with several intrinsic matrix-related peaks were observed when CHCA and 9-AA were used as matrices in the different ionization modes, whereas IVONSs:Sm showed much less background interference over the range m/z 150–600, especially in negative-ion mode, further demonstrating the potential of IVONSs:Sm as a stable nano-matrix for LDI-MS analysis.
Furthermore, the study evaluated the LDI-MS performance of IVONSs:Sm in contrast to previously reported nano-matrices such as graphene and metal oxides (e.g., cerium oxide (CeO 2 )), which have been recognized for their superiority in SALDI-MS analysis of LMW compounds (Figure S4 a, b) [ 34 , 37 ]. Figure 2 c, d revealed that MS signals of the four fatty acids were distinctly detectable as [M − H] − with all three nano-matrices. However, the higher analyte signals for IVONSs:Sm suggested its superiority over graphene and CeO 2 in detection sensitivity (Fig. 2 e), which could be ascribed to a synergistic effect of the stronger UV absorption and higher photothermal capability of IVONSs:Sm, together with potentially effective interactions between IVONSs:Sm and surface LMW compounds. As shown in Figure S4 c, UV-vis absorption spectra of the three nano-matrices at the same dispersing concentration were investigated. The stronger absorption of IVONSs:Sm than of the other nano-matrices at the operational wavelength (355 nm) made it a benign receptor of laser energy in the LDI-MS process. Additionally, the infrared thermographic images and photothermal heating curves of the three nano-matrices indicated that IVONSs:Sm had more advantageous photothermal conversion than the two conventional nano-matrices, enabling it to effectively transfer the absorbed laser energy and enhance the ionization of analytes (Figure S5 ). With this in mind, IVONSs:Sm could be an optimal choice of nano-matrix for analyzing LMW compounds. Furthermore, Fig. 2 e and Figure S6 suggested that the incorporation of Sm 3+ into IVONSs contributed synergistically to the negative ionization process. Notably, IVONSs without samarium doping yielded an MS spectrum with the same characteristic peaks of the four fatty acids but insufficient MS intensities.
This outcome demonstrated the superior feasibility of IVONSs:Sm over pure IVONSs in the analysis of fatty acids. Accordingly, the optimal dopant amount was investigated, and Fig. 2 f indicated that the MS signal intensities of the four fatty acids reached their maximum at 5.0 mol % samarium, suggesting an optimal samarium molar ratio for IVONSs:Sm.
Mechanistic basis of IVONSs:Sm-assisted LDI-MS
While the precise LDI mechanism remains to be conclusively established, numerous studies have provided evidence supporting a laser-induced photoexcitation and electronic transition mechanism in the ionization process [ 38 ]. Hence, the enhancement of MS signals with IVONSs:Sm could also be elucidated by the electronic transition mechanism for the negative ionization of LMW molecules. As depicted in Fig. 2 g (inset), the input laser energy could be transduced through the generation of electron-hole pairs in IVONSs:Sm, where the excited electrons could be emitted from IVONSs:Sm and subsequently interact with LMW molecules, facilitating the negative ionization of the analyte as [M − H] − . To validate this mechanism, potassium thiocyanate (KSCN) was employed as a hole scavenger to investigate the correlation between the generation of electron-hole pairs and the LDI process of LMW molecules [ 38 ]. Figure 2 g demonstrated decreasing deprotonated ion peaks of LA (m/z 199.2) and OA (m/z 281.2) with increasing KSCN concentration (0, 5, 10, 20 μmol/mL), showing a distinct suppression of the MS signal of the two fatty acids at higher KSCN concentrations, thus experimentally supporting the electronic transition mechanism in Fig. 2 g [ 39 ]. Consequently, the electronic structure and optoelectronic properties of IVONSs:Sm were examined. Figure 2 h, i and Figure S7 presented the electron density analysis of IVONSs and IVONSs:Sm on the crystal face (100). In comparison to pure IVONSs (Figure S7 ), the distinct electron cloud overlap in IVONSs:Sm confirmed the coexistence of electrovalent and covalent bonding in the samarium-oxygen polar linkages (Fig. 2 h). Accordingly, the differential electron density diagram indicated that the electron density of O 2- adjacent to Sm 3+ increased (Fig. 2 i).
Thus, the octahedra formed by Sm 3+ –O 2- bonds enhanced the electrical dipole moment in the lattice, which was beneficial for the diffusion of charge carriers. On the other hand, Figure S8 a depicted the diffuse reflectance ultraviolet-visible spectra of IVONSs:Sm and pure IVONSs. Notably, the strong absorption bands ranging from 200 to 400 nm were mainly attributed to the charge transfer effect between V 5+ –O 2- and the f-f transitions of Sm 3+ (Figure S8 b) [ 40 , 41 ], implying that IVONSs:Sm efficiently absorbed the laser power (e.g., the 355 nm Nd:YAG laser) in the MALDI-MS instrument, thus allowing for the subsequent ionization process. According to the Tauc equation, the band gaps of IVONSs:Sm and pure IVONSs derived from the optical absorption were calculated to be 2.19 and 2.28 eV, respectively (Fig. 3 a). The lower band gap of IVONSs:Sm relative to pure IVONSs was further supported by density functional theory (DFT) simulation (Fig. 3 b, c), where the DFT-derived band gap of pure IVONSs aligned with the experimental data. For IVONSs:Sm, the DFT-derived direct band gap was slightly smaller than the experimental value, as DFT simulations tend to underestimate band gaps [ 42 ]. The density of states (DOS) results of IVONSs indicated that the conduction band was predominantly composed of V 3d states (Figure S9 a), while in IVONSs:Sm the V 3d states hybridized with the Sm 4f states to form the conduction band (Fig. 3 d). This suggested that the Sm 4f states shifted the conduction band towards lower energy, thus reducing the band gap of IVONSs:Sm. The experimental and computational results consistently demonstrated a decrease in the IVONSs band gap upon Sm 3+ doping. Importantly, the lower band gap of IVONSs:Sm promoted electronic transitions and charge carrier generation, which was conducive to the negative ionization of analytes as per the mechanism illustrated above.
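As a side note, the Tauc extrapolation described above can be reproduced with a short script. This is an illustrative sketch only: it uses a synthetic linear Tauc region with the reported 2.19 eV gap built in and assumes a direct allowed transition (plotting (αhν)² versus hν); it is not the authors' actual analysis of their spectra.

```python
def linfit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Synthetic linear Tauc region for illustration: (alpha*h*nu)^2 rises linearly
# above an assumed direct band gap of 2.19 eV (the value reported for
# IVONSs:Sm); real input would come from the diffuse-reflectance spectra.
EG_TRUE = 2.19
hv = [2.30 + 0.02 * i for i in range(36)]   # photon energies, 2.30-3.00 eV
tauc = [5.0 * (e - EG_TRUE) for e in hv]    # (alpha*h*nu)^2, arbitrary units

slope, intercept = linfit(hv, tauc)
eg_est = -intercept / slope                 # x-axis intercept = band gap estimate
print(f"Estimated band gap: {eg_est:.2f} eV")
```

Extrapolating the linear region of the Tauc plot to zero absorption in this way is how values such as 2.19 eV (IVONSs:Sm) and 2.28 eV (pure IVONSs) are typically extracted from plots like Fig. 3 a.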
In addition, Fig. 3 e ~ h depicted atomic force microscope (AFM) and surface photovoltage (SPV) images of the two vanadate nanosheets, providing a spatial visualization of the generation and transport of charge carriers at the nanoscale. The AFM images of IVONSs:Sm and pure IVONSs, with similar lateral dimensions and thickness of approximately 5 nm, were presented in Fig. 3 e and g. However, the corresponding SPV images, obtained by subtracting the potentials under dark conditions, revealed that IVONSs:Sm exhibited a more negative light-induced potential change of approximately 30 mV compared to pure IVONSs. This indicated a higher accumulation of electrons and greater mobility of charge carriers from the bulk phase to the surface of IVONSs:Sm, which facilitated energy transfer from the nanomaterial to LMW molecules, consequently increasing the efficiency of negative ionization in LDI-MS. To gain insight into the role of the nanomaterial IVONSs:Sm in the LDI process, we further investigated the thermodynamic profile of adsorption and dissociation of analytes on the two vanadate nanosheets using DFT simulations. As a representative LMW molecule, the electrostatic potential (ESP) of OA was initially calculated based on the ground state electron density (Figure S9 b). The carboxyl group was found to be favorable for OA (RCOOH) to be adsorbed onto the vanadate nanosheets (RCOOH*) due to its maximum ESP. Consequently, the acidic hydrogen in the carboxyl terminal tended to transfer from OA to the surface lattice oxygen, liberating the negative ion of OA (RCOO − ). Figure 3 i demonstrated that the Gibbs free energies of each step were reduced on IVONSs:Sm relative to pure IVONSs, indicating that the LDI process on the surfaces of IVONSs:Sm was thermodynamically favorable. 
Taking all of the above into consideration, the improved performance of IVONSs:Sm in nanomaterial-assisted LDI could be elucidated as a synergistic effect of various factors, including optical absorption and charge carrier mobility.
IVONSs:Sm-assisted LDI-MS analysis of fatty acids
Due to its superior performance, IVONSs:Sm was determined to be a highly effective nano-matrix for in situ detection of LMW compounds in authentic samples, such as fingerprints. A lifting process using double-sided conductive copper foil tape was performed to retrieve fingerprints from surfaces. The exceptional ability of IVONSs:Sm to negatively ionize LMW compounds on the copper conductive tape was also verified (Figure S10 ). To assess the reproducibility of IVONSs:Sm-assisted LDI-MS in negative-ion mode, the mass spectrometry signal intensities of representative LA (saturated fatty acid) and OA (unsaturated fatty acid) were collected from fifteen randomly selected positions within a single spot (diameter ~ 2.1 mm) of analytes deposited on the copper conductive tape. As shown in Fig. 3 j, relatively stable MS signals for LA and OA were observed, and a low relative standard deviation (RSD) for each acquisition confirmed the high shot-to-shot reproducibility of the IVONSs:Sm-assisted LDI-MS approach (Fig. 3 k). In comparison, the MS signal intensities for LA and OA showed significant fluctuations with an RSD of ~ 20% for each acquisition (Fig. 3 j), indicating unsatisfactory repeatability when employing the traditional organic matrix 9-aminoacridine (9-AA). The good shot-to-shot reproducibility was mainly attributed to the difference in the homogeneity of IVONSs:Sm and 9-AA deposited on the copper conductive tape. Figure S11 demonstrated the crystallization of IVONSs:Sm and two traditional organic matrices (α-cyano-4-hydroxycinnamic acid (CHCA) and 9-AA) obtained in a MALDI-MS spectrometer, in which a stark difference in matrix distribution between IVONSs:Sm and the two organic matrices was evident. IVONSs:Sm spread more uniformly after solvent evaporation, instead of exhibiting the "sweet spot" effect of CHCA and 9-AA on the target surface.
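The shot-to-shot reproducibility metric quoted above is the relative standard deviation (RSD) of repeat acquisitions. A minimal sketch (the intensity values below are made up for illustration, not the measured data):

```python
from statistics import mean, stdev

def rsd_percent(intensities):
    """Relative standard deviation (%): sample stdev / mean * 100."""
    return stdev(intensities) / mean(intensities) * 100.0

# Hypothetical signal intensities from five positions within one spot.
shots = [98.0, 102.0, 100.0, 99.0, 101.0]
print(f"RSD = {rsd_percent(shots):.2f}%")  # prints RSD = 1.58%
```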
The good shot-to-shot reproducibility effectively addressed the variability of signal intensity, providing assurance of reliable quantitative analysis of analytes using IVONSs:Sm. Fig. 3 l demonstrated that the MS peaks of the two deprotonated fatty acids [M − H] − gradually rose with increasing analyte concentrations. The MS responses for LA at m/z 199.2 and OA at m/z 281.2 were proportional to the concentrations of the target analytes (Figure S13 a), and limit of detection (LOD) investigations of LA and OA corroborated the sensitivity of the IVONSs:Sm-assisted LDI-MS approach (8.2 μM for LA and 11.6 μM for OA). As shown in Figure S12 , a quantitative analysis of residual LA and OA in real fingerprint samples was further performed using high-performance liquid chromatography-electrospray ionization mass spectrometry (HPLC-ESI-MS). The residual quantities of LA and OA in a fingerprint sample were determined to be 0.36 and 2.72 μg, respectively. Taking the LODs and the added volume (1 μL) of fatty acid standard solutions into account, the proposed IVONSs:Sm-assisted LDI-MS approach was capable of detecting 1.6 × 10 − 3 μg of LA and 3.3 × 10 − 3 μg of OA in real samples, which were substantially lower amounts than those detectable by HPLC-ESI-MS. Therefore, the quantitative results supported that IVONSs:Sm-assisted LDI-MS could satisfy the analytical demand of fatty acid levels in fingerprints. Given that a high salt concentration might suppress the ionization process of the analyte, the salt tolerance was evaluated by adding different concentrations of NaCl (0 ~ 500 mM) in IVONSs:Sm-assisted LDI-MS detection of the two representative fatty acids. Figure 3 m showed that the addition of 10–500 mM NaCl only slightly decreased the signal intensities of LA and OA, confirming the good salt tolerance of the nano-matrix IVONSs:Sm.
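The conversion from the concentration LODs to the absolute detectable masses can be checked with simple unit arithmetic. A sketch in Python; the molecular weights are our assumption (lauric acid for LA, oleic acid for OA, inferred from the reported [M − H]⁻ values of m/z 199.2 and 281.2):

```python
def lod_mass_ug(conc_um, volume_ul, mw_g_per_mol):
    """Absolute detectable mass (ug) from a concentration LOD.

    mass = concentration (mol/L) * volume (L) * molecular weight (g/mol),
    with unit factors uM -> mol/L (1e-6), uL -> L (1e-6), g -> ug (1e6).
    """
    return conc_um * 1e-6 * volume_ul * 1e-6 * mw_g_per_mol * 1e6

# Assumed molecular weights: lauric acid ~200.32 g/mol, oleic acid ~282.46 g/mol.
la_mass = lod_mass_ug(8.2, 1.0, 200.32)    # ~1.6e-3 ug of LA
oa_mass = lod_mass_ug(11.6, 1.0, 282.46)   # ~3.3e-3 ug of OA
print(f"LA: {la_mass:.2e} ug, OA: {oa_mass:.2e} ug")
```

Under these assumptions, the arithmetic reproduces the reported 1.6 × 10⁻³ μg (LA) and 3.3 × 10⁻³ μg (OA) figures to the stated precision.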
Additionally, it was noted that the characteristic MS peaks of the LA and OA mixture obtained using IVONSs:Sm after two months of storage matched those obtained using freshly prepared IVONSs:Sm (Fig. 3 n), and there was insignificant divergence in the pattern and intensity of the MS peaks after multiple laser shots (Figure S13 b, c). Hence, the stability of IVONSs:Sm, as well as the photostability of analytes during IVONSs:Sm-assisted LDI-MS analysis, could be experimentally ascertained.
IVONSs:Sm-assisted LDI-MS imaging of fingerprints
Based on the aforementioned study, the applicability of IVONSs:Sm for detecting LMW compounds adhering to fingerprints was preliminarily investigated. IVONSs:Sm were dispersed in bromoethane, a solvent with a low boiling point, and then sprayed onto the fingerprint section through an atomizer. Following the deposition of IVONSs:Sm, the fingerprint was lifted using copper conductive tape. Scanning electron microscopy (SEM) images and results from EDS analysis confirmed the concentration of IVONSs:Sm on the ridges of the fingerprint after evaporation of the dispersing medium (Fig. 4 a, b). To collect IVONSs:Sm-assisted LDI-MS spectra of the fingerprints, the copper conductive tape bearing the extracted fingerprint was affixed to an indium tin oxide-coated glass slide for IVONSs:Sm-assisted LDI-MS analysis. Figure S14 a presented the positive-ion MS profile obtained from the extracted fingerprint, which showed the MS spectrum of an endogenous mixture containing PA, hexadecanoic acid (HA), OA, and stearic acid (SA). The abundance of MS peaks confirmed that the chemical information of the fingerprint had been transferred onto the substrate of the copper conductive tape. Apart from the fatty acids, ion peaks originating from other endogenous compounds were also recorded in the range m/z 100–700. For example, the peaks detected at m/z 147.1 and 177.1 were assigned to the [M + H] + ions of lysine (Lys) and ascorbic acid (AA), respectively. In contrast, the MS spectrum of the extracted fingerprint obtained in negative-ion mode exhibited a relatively concise signal with long-term stability in high vacuum (Fig. 4 c and Figure S14 b ~ d). Unlike the multiple peaks per analyte in positive-ion mode, all the fatty acids were clearly detected solely as deprotonated [M − H] − ions with the assistance of IVONSs:Sm. Additionally, intense ions in the fingerprint were putatively identified via MALDI LIFT-TOF/TOF MS (Figure S15 , S16 , and Table S3 ).
Furthermore, considering the lipids coexisting in fingerprints, as well as the potential ester bond cleavage in the presence of IVONSs:Sm, a control experiment was performed and indicated a relatively stable MS signal of two model fatty acids in the presence of two typical lipids (glycerol trimyristate and glycerol tripalmitate), which excluded the interference of lipids on subsequent fatty acid analysis (Figure S17 ). All the results suggested the advantages and feasibility of negative-ion IVONSs:Sm-assisted LDI-MS for spatial molecular profiling in fingerprints.
Since IVONSs:Sm allowed for the MS analysis of chemical species contained within the fingerprint, IVONSs:Sm-assisted LDI-MS images of the fingerprint could be generated from the spatial distribution of the MS signals of selected deprotonated analytes. Considering that the spraying conditions of IVONSs:Sm could affect the loading amount and homogeneous coverage of the nano-matrix on the fingerprint, critical experimental parameters (including the dispersing concentration of IVONSs:Sm and the spraying time) were optimized. For the sake of simplicity, HA was taken as the representative fatty acid for IVONSs:Sm-assisted LDI-MS analysis in negative-ion mode. As shown in Figure S18 a, the vanadium content on the substrate of the copper conductive tape was determined by inductively coupled plasma mass spectrometry (ICP-MS), which implied that the loading amount of IVONSs:Sm on the fingerprint gradually increased with the IVONSs:Sm concentration (Figure S18 b). As a consequence, the MS signal of HA at m/z 255.2 increased with the dispersing concentration of IVONSs:Sm. Likewise, the spraying time was optimized, and the results were presented in Figure S18 c. The increase in vanadium content and MS signal with the spraying time of the IVONSs:Sm suspension further confirmed that adequate IVONSs:Sm deposition was beneficial to the enhancement of the MS signal. Aside from evaluating the spraying conditions in terms of analyte MS signal intensity, their effect on the quality of the resulting MS images was also assessed. MS images with higher clarity and contrast were obtained with increasing concentration of IVONSs:Sm (Figure S19 a ~ c).
Figure S19 d ~ f indicated that the MS images of the fingerprint gradually blurred when the spraying time increased to 60 s, suggesting that a prolonged deposition procedure of IVONSs:Sm resulted in an undesirable MS image lacking the original minutiae of the fingerprint pattern. This phenomenon might be ascribed to the fact that extended and continuous exposure to sprayed bromoethane droplets could thoroughly wet the fingerprint area, causing delocalization and diffusion of the hydrophobic fatty acids; thus the corresponding MS image of the fingerprint was obtained with poor spatial resolution. In this case, a proper spraying time enabled a well-defined MS image in fingerprint analysis. Based on the above results, the optimal dispersing concentration (5 mg/mL) and spraying time (30 s) of the IVONSs:Sm suspension for matrix deposition were established.
In order to investigate the feasibility of the proposed IVONSs:Sm-assisted LDI-MS method for fingerprint analysis, fingerprints left on different material surfaces were analyzed via IVONSs:Sm-assisted LDI-MS imaging in negative-ion mode. As shown in Fig. 4 d, the fingerprint morphological features could be clearly perceived, and the MS images could also provide the spatial distribution of characteristic LMW compounds detected at m/z 145.1 (Lys), 175.0 (AA), 255.2 (HA), and 281.2 (OA). Importantly, there was no definite difference in imaging quality among the three material surfaces. These results demonstrated the capability of IVONSs:Sm-assisted LDI-MS imaging to enable molecular-level fingerprint recognition on different material surfaces. Figure S20 demonstrated that the imaging performance of IVONSs:Sm compared favorably with those of graphene and CeO 2 . Compared to graphene and CeO 2 (Table S4 ), the highest signal intensity and the largest number of detectable endogenous LMW compounds in fingerprints were obtained using IVONSs:Sm (Figure S20 a). As seen in Figure S20 b, fingerprint morphological features and the spatial distribution of two representative LMW compounds (OA and Lys) could be clearly visualized, while the relatively lower MS intensities of LMW compounds with the nano-matrix graphene limited the applied performance of MS imaging for the fingerprint (Figure S20 c). Moreover, a lower contrast between the ridges and valleys of the fingerprint was observed in the MS images obtained using the nano-matrix CeO 2 (Figure S20 d). This phenomenon might be mainly ascribed to the low hydrophobicity of the nano-matrix CeO 2 (Figure S21 ), which led to an uneven dispersion of CeO 2 in the weakly polar suspending medium and hence inhomogeneous coverage on the fingerprint. In addition, the more hydrophilic CeO 2 could also reduce the performance of LDI-MS imaging via attenuated interaction between the nano-matrix and hydrophobic LMW compounds (e.g., fatty acids).
All of these results suggested that IVONSs:Sm, with appropriate hydrophobicity and high LDI capability, could surpass the nano-matrices graphene and CeO 2 in the imaging quality of LMW compounds in fingerprints. Moreover, given that the analysis of exogenous compounds on the fingerprint, such as individual skincare products or drug residues, is of great value for reconstructing a lifestyle profile of the fingerprint donor, exogenous fingerprints were prepared after the use of a liquid soap or acne cream and then analyzed via IVONSs:Sm-assisted LDI-MS imaging. Figure 4 e exhibited a negative-ion mode MS spectrum and the derived MS images of two representative compounds, namely HA (endogenous) and sodium dodecyl benzene sulfonate (SDBS, exogenous). SDBS is a widely used surfactant in personal care products, and the related [M − Na] − ion of SDBS was detected at m/z 325.2. The existence and spatial distribution of this exogenous ingredient could be verified by the MS image, which overlapped well with the fingerprint pattern derived from the negative ion at m/z 255.2 of HA. Similarly, fingerprint chemical images of HA and the key pharmaceutical component of acne cream (retinoic acid, RA) were extracted from the MS spectrum to display the fingerprint pattern. Figure 4 f confirmed that the fingerprint ridge details could be observed using the intensity map of the [M − H] − ion of HA (m/z 255.2) or RA (m/z 299.2). To further evaluate the detectability of exogenous LMW compounds on the fingerprint, fingerprint samples with different content levels of SDBS, selected as a model exogenous analyte, were collected after washing hands with lotions of gradient dilution ratios (Figure S22 a). It was found that SDBS could be detected with a signal-to-noise ratio (S/N) of 5.1 even at a dilution ratio of 1/10. Besides, the LODs of SDBS and RA were calculated to be 3.6 ng and 12.9 ng, respectively.
Consequently, the IVONSs:Sm-assisted LDI-MS method demonstrated gratifying sensitivity and great prospects in monitoring exogenous compounds on the fingerprint, especially in the field of forensic science. The capability to sensitively determine both fingerprint morphology and chemical information would greatly enhance the value of fingerprints recovered at crime scenes, particularly those not included in existing fingerprint databases.
On the other hand, we noted that the age of a fingerprint is another piece of evident information that can narrow down persons of interest. Encouraged by the good performance shown above, an attempt was made to assess the age of fingerprints using the IVONSs:Sm-assisted LDI-MS tool. For the sake of accuracy, all fingerprint samples were stored at 30 °C and 60% relative humidity (RH). Figure 4 g presented MS spectra of representative fresh and aged fingerprints in negative-ion mode. The unsaturated OA was found to undergo a peroxidation reaction, resulting in a signal decrease at m/z 281.2, while the MS signals of saturated HA in fingerprints were relatively stable over time. At the same time, two additional peaks related to the degradation of OA appeared with prolonged fingerprint age, which were identified as oleic acid hydroperoxide (OAHP, m/z 314.2) and 9-oxo-nonanoic acid (ONA, m/z 172.1). The peroxidation mechanism of OA based on former studies is given in Fig. 4 h [ 43 ]. Namely, OA undergoes a radical-mediated autoxidation in ambient air, forming isomeric OAHP products; secondary oxidation products such as ONA and nonanal are then derived from the cleavage of OAHP. Likewise, the decrease of the oxidizable AA and the new peak that emerged at m/z 136.0, assigned to threonic acid (TA), confirmed the oxidative degradation pathway of AA. The widely accepted mechanism of AA oxidation involves the generation of a series of active intermediates (e.g., dehydroascorbic acid (DHA) and diketogulonic acid (DKG)) en route to TA [ 42 , 44 ]. That is, DHA, an oxidized form of AA, is hydrolyzed to DKG, and the intermediate DKG is further decomposed to TA after a rearrangement reaction (Fig. 4 i). The corresponding MS images provided compelling support for these trends of OA and AA over time. The MS image of the HA spatial pattern remained clearly visible and practically invariant on the fingerprint ridges over time (Fig. 5 a).
In contrast, the fingerprint patterns of OA and AA gradually faded from day 0 to day 60. The MS image of TA was absent during the initial period of fingerprint aging but became much clearer over time. These results gave a visual demonstration of the time-dependent changes of the representative LMW compounds in fingerprints. Importantly, quantitative analysis of the changes in the LMW compounds implied the possibility of tracking the age of fingerprints. To reduce systematic error, the MS signals were normalized by calculating the intensity ratios (R OA , R AA , and R TA ) using the following equations: R OA = I OA /I HA , R AA = I AA /I HA , R TA = I TA /I HA , where I OA , I AA , and I TA represented the MS signal intensities of OA at m/z 281.2, AA at m/z 175.1, and TA at m/z 136.0, respectively, and I HA stood for the HA intensity at m/z 255.2, which showed no significant change over time. Hence, Fig. 5 b ~ d illustrated the curves of R OA , R AA , and R TA over a two-month period of fingerprint aging, respectively. The decrease of R OA (and R AA ) and the increase of R TA were consistent with the differences in the MS images in Fig. 5 a. Similarly, Figure S22 b ~ d provided a trend of the calculated MS intensity ratios (R OA , R AA , and R TA ) over time consistent with Fig. 5 b ~ d. The insignificant difference among all the groups suggested only a minor influence of ambient fluctuations on the curves of R OA , R AA , and R TA , confirming the validity of this means of establishing fingerprint age. Moreover, as a simulation experiment for fingerprint age determination, ten simulated fingerprint specimens with different aging times were analyzed via the IVONSs:Sm-assisted LDI-MS approach. Semi-quantitative evaluation of fingerprint age was performed based on the time-dependent MS intensity ratios (R OA , R AA , and R TA ). Inferred fingerprint aging times were derived by averaging the aging times estimated from the three analytes.
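The normalization step defined above is straightforward; a sketch with hypothetical intensity values (HA serves as the internal reference because its signal is stable over time):

```python
def normalized_ratios(i_oa, i_aa, i_ta, i_ha):
    """Normalize analyte MS intensities by the stable HA signal."""
    return {"R_OA": i_oa / i_ha, "R_AA": i_aa / i_ha, "R_TA": i_ta / i_ha}

# Hypothetical intensities for a fresh vs. a 60-day-aged fingerprint.
fresh = normalized_ratios(i_oa=800.0, i_aa=400.0, i_ta=10.0, i_ha=1000.0)
aged = normalized_ratios(i_oa=200.0, i_aa=100.0, i_ta=300.0, i_ha=1000.0)

# Aging signature: R_OA and R_AA fall while R_TA rises.
assert fresh["R_OA"] > aged["R_OA"] and fresh["R_TA"] < aged["R_TA"]
print(fresh, aged)
```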
Comparisons between the true and predicted results were visualized by a heat map, and the color of the squares denoted the gradation of fingerprint age (Figure S23 a). Verification with low prediction error proved the practicality of the IVONSs:Sm-assisted LDI-MS tool for determining the aging time of unknown fingerprint samples.
We subsequently sought to extend the application of the IVONSs:Sm-assisted LDI-MS method to monitor biomarkers excreted from finger sweat glands. Elevated levels of conjugated bilirubin (bilirubin glucuronide, BilG) in biofluids are related to various hepatopathies [ 45 ]. Hence, rapidly monitoring the level of BilG in a label-free manner is of great importance for early diagnosis (Figure S23 b). With its reliable, highly sensitive, and label-free features, IVONSs:Sm-assisted LDI-MS is particularly well-suited for trace BilG detection. The typical MS spectrum of a sweat fingerprint sample from an acute hepatitis patient displayed the [M − H] − peak of BilG at m/z 759.3, which was putatively identified via MALDI LIFT-TOF/TOF MS (Fig. 5 e, f). Additionally, the MS images exhibited a sweat pore-centered spatial distribution of BilG in the fingerprint ridge area (inset, Fig. 5 e). For the sweat fingerprint sample from a healthy donor, an MS spectrum without a BilG signal was observed (Figure S23 c). To further evaluate the feasibility of the proposed IVONSs:Sm-assisted LDI-MS tool in medical analysis, BilG levels in fingerprints from 10 healthy donors and 6 hepatitis patients were analyzed. Figure S24 enumerated the negative-ion mode MS spectra of fingerprints from the hepatitis patient group, and the higher MS intensity at m/z 759.3 relative to the healthy group indicated an increased BilG concentration in the sweat of hepatitis patients (Fig. 5 g and Figure S25 ). By using the MS signal of HA (m/z 255.2) as an internal standard, this discrepancy could be more clearly discriminated via a ratiometric parameter (MI 759.3 /MI 255.2 ), where MI 759.3 and MI 255.2 represented the MS signal intensities at m/z 759.3 and 255.2, respectively (Fig. 5 h). Furthermore, the serum total bilirubin (TBil) level of the 16 participants, widely used in serodiagnosis, was determined by the clinical bilirubin oxidase method.
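Agreement between the fingerprint-derived parameter and the serum measure can be quantified with the Pearson correlation coefficient; a minimal sketch (the paired values below are synthetic, not the clinical data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic paired values standing in for MI759.3/MI255.2 vs. serum TBil.
ms_ratio = [0.1, 0.3, 0.2, 0.8, 0.9, 0.7]
tbil = [8.0, 12.0, 10.0, 40.0, 55.0, 35.0]
print(f"r = {pearson_r(ms_ratio, tbil):.3f}")  # strongly positive correlation
```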
Figure 5 i showed a good consistency between the parameter MI 759.3 /MI 255.2 and the TBil concentration in identifying hepatitis patients, which was further corroborated by Pearson correlation analysis (r = 0.872) in Fig. 5 j. Owing to its label-free operation, non-invasive sampling, and quick response, the proposed method possesses outstanding advantages over many bilirubin assay techniques [ 46 , 47 ].
Conclusion
In brief, IVONSs:Sm nanosheets were synthesized for the first time and incorporated into a negative-ion SALDI-MS system as an effective nano-matrix for the detection of LMW compounds. IVONSs:Sm exhibited distinct advantages in the interpretability and reproducibility of MS spectra when compared with conventional organic matrices. Furthermore, the application of the IVONSs:Sm-assisted LDI-MS method to fingerprint analysis demonstrated a multitude of advantageous characteristics. First, fingerprint ridge patterns were successfully recognized with minimal background interference and amplified signal intensity. Second, thorough examination of fingerprints, combining nuanced molecular-level variations with their changes over time, proved sufficient to yield vital biometric data and forensic evidence. Furthermore, the use of copper conductive tape facilitated the extraction of fingerprints, expanding the method's applicability to various non-conductive materials. Additionally, the label-free and non-invasive characteristics of the IVONSs:Sm-assisted LDI-MS platform render it exceptionally well-suited for the advancement of preventive and diagnostic healthcare applications. Hence, a revolutionary framework for multidimensional fingerprint analysis was introduced in this study, which not only provides an innovative nano-matrix for negative-ion LDI-MS but also facilitates the monitoring of specific targets in biosamples.
Abstract
This study presents the first synthesis of samarium-doped indium vanadate nanosheets (IVONSs:Sm) via a microemulsion-mediated solvothermal method. The nanosheets were subsequently utilized as a nano-matrix in laser desorption/ionization mass spectrometry (LDI-MS).
It was discovered that the as-synthesized IVONSs:Sm possessed several advantages: improved mass spectrometry signals, minimal matrix-related background, and exceptional stability in negative-ion mode. These qualities overcame the limitations of conventional matrices and enabled the sensitive detection of small biomolecules such as fatty acids. The negative-ion LDI mechanism of IVONSs:Sm was examined through density functional theory simulation. Using IVONSs:Sm-assisted LDI-MS, fingerprint recognition based on morphology and the chemical profiles of endogenous/exogenous compounds was also achieved. Notably, crucial characteristics such as the age of an individual's fingerprints and their physical state could be assessed through the longitudinal monitoring of particular biomolecules (e.g., ascorbic acid, fatty acids) or the specific biomarker bilirubin glucuronide. The analysis of the compounds underlying fingerprint patterns could thus facilitate access to critical information pertinent to the identification of an individual.
Graphical Abstract
Supplementary Information
The online version contains supplementary material available at 10.1186/s12951-023-02239-w.
Keywords
Electronic supplementary material
Below is the link to the electronic supplementary material.
Author Contributions
J.W. conceived and designed the research. Y.Z., P.Z., Y.S., and H.L. performed experiments. J.W. and Y.Z. analyzed the data and wrote the manuscript. C.F., S.L., and P.A. provided suggestions and technical support. All authors reviewed and approved the manuscript.
Funding
This work is financially supported by the Natural Science Foundation of Hunan Province, China (2021JJ40452, 2021JJ80072, 2023JJ40220, 2023JJ60494), Scientific Research Project of Hunan Provincial Health Commission (202112062218), Scientific Research Project of Hunan Provincial Education Department (22B0455), Medical Technology Innovation Guidance Project of Hunan Province Science and Technology Department (2020SK51805), Macao Young Scholars Program (AM2022020), and Doctoral Scientific Research Foundation of University of South China (200XQD042).
Data Availability
Data will be made available on request.
Declarations
Competing interests
The authors declare no competing interests.
License: CC BY. Citation: J Nanobiotechnology. 2023 Dec 10; 21:475.
PMC10713752 (PMID: 38090265)
Introduction
Occipital neuralgia (ON) is a disabling headache disorder that involves lancinating pain in the distribution of the greater occipital nerves (GON), lesser occipital nerves (LON), and/or third occipital nerves (TON). The GON is most frequently involved and presents with pain that originates in the neck or skull base and radiates superiorly toward the fronto-orbital regions ( 1 ). Less frequently, the LON is involved, often in conjunction with involvement of the GON. ON pain involving the LON travels laterally in the occipital scalp and radiates toward the ipsilateral ear and temple ( 2 ). The TON is located medially and more caudally than the GON and innervates the upper neck and lower occipital scalp.
According to the International Classification of Headache Disorders Third Edition (ICHD-3) diagnostic criteria, ON involves pain of shooting, stabbing or sharp in quality, palpation tenderness in the distribution of the involved nerve(s), and improvement of symptoms with nerve blocks (NB) ( 3 ). The diagnostic criteria are listed in Table 1 .
In clinical practice, isolated ON as the sole chief complaint is uncommon; ON is more commonly appreciated in patients who have another headache disorder such as migraine ( 4 ). Consequently, ON tends to be underdiagnosed in clinical practice ( 5 ). In a prevalence study conducted at a headache specialty clinic, 25% of patients complaining of headaches were diagnosed with ON, and most of these patients also had another coexisting headache disorder ( 4 ). When ON coexists with other headache disorders, it can either trigger or worsen the other headache types. Therefore, inadequate treatment of ON will often result in exacerbation of, and increased treatment resistance in, coexisting headache disorders, necessitating greater use of medications.
The treatment of ON is multimodal. Physical therapy including postural training can improve symptoms such as muscle tension but is often insufficient ( 6 ). Medication options span various pharmacologic classes including anti-inflammatories, anticonvulsants, tricyclic anti-depressants, selective norepinephrine reuptake inhibitors, muscle relaxants, and CGRP antagonists ( 7 , 8 ). However, the use of medications for headaches has limitations in terms of contraindications, side effects, and inconsistent efficacy ( 9 , 10 ). A response to NB is part of the ICHD-3 diagnostic criteria for ON and can provide therapeutic remissions lasting for days, weeks, months, or even years. In patients with longer-lasting benefits, it is hypothesized that larger volume NB through hydro-dynamic forces can cause an expansion of the muscle, fascia, and other surrounding tissues that may be compressing the occipital nerves ( 11 , 12 ). Onabotulinum toxin A (BTX) has also demonstrated efficacy in treating ON ( 13 , 14 ).
For patients with refractory ON, radiofrequency ablation (RFA), neurostimulation, and surgical nerve decompression are therapeutic options. Although RFA has demonstrated efficacy for the treatment of ON, improvement of pain is often temporary and complications include permanent iatrogenic nerve injury, which can reduce the effectiveness of subsequent peripheral nerve-sparing decompression procedures ( 15 , 16 ). Implanted neurostimulation has also been shown to effectively treat refractory ON and reduce medication use ( 17 ). However, the resolution of pain may only be temporary, and devices have technical limitations and are associated with complications such as electrode displacement, battery replacement, hardware malfunction, and infection ( 18 ).
Nerve decompression surgery is indicated in patients who have failed management with conservative therapies. The principles of occipital nerve decompression surgery include the release of affected occipital nerves at all possible compression points and cushioning of the nerve to avoid scarring ( 19 – 21 ). The occipital nerves can be compressed by fascia, scar, muscle, or vasculature along its trajectory. Most frequently, the occipital nerves are found to be compressed by a thickened overlying trapezius fascia, commonly seen in patients with previous head or neck injury ( 22 ). The symptomatic improvement of ON from nerve decompression surgery has been demonstrated in multiple studies ( 23 – 29 ). Although many patients can benefit from surgery, a detailed analysis revealed a binary distribution of outcomes, with most patients either improving completely or not at all after surgery ( 30 ). This highlights the importance of establishing the correct diagnosis and proper patient selection for nerve decompression surgery to ensure successful outcomes, and conversely, to prevent surgical treatment of patients who are unlikely to benefit from the procedure.
Currently, nerve decompression surgery is underrecognized as a possible treatment option for patients with refractory ON. Education regarding the diagnosis of ON and the role of surgical treatment is necessary to broaden and improve treatment algorithms for ON. The objective of this study is to describe the clinical features, treatment path, selection process, and outcomes of occipital nerve decompression surgery in patients with refractory ON under the care of both neurologists and plastic surgeons. | Methods
Institutional review board approval was obtained at the Massachusetts General Hospital in Boston, Massachusetts, with all patients providing informed consent. This prospective case series includes 15 consecutive patients who were managed by a single board-certified, fellowship-trained headache specialist and neurologist (PGM) and referred to a board-certified plastic surgeon (WGA) for occipital nerve decompression surgery between 2015 and 2022. Patient selection for nerve decompression surgery candidacy was based on five internal selection criteria formulated by the neurologist and plastic surgeon:
Diagnosis of ON based on the ICHD-3 criteria preferably by a headache specialist/neurologist.
At least 15 headache days/month with failure of at least three different oral preventative medications (e.g., anticonvulsants, tricyclic antidepressants, and selective norepinephrine reuptake inhibitors), large volume occipital NB (6 cc of 0.75% bupivacaine without steroids), and at least three cycles of BTX injections.
Investigation of other potential causes of ON-like headaches including cervical spine MRI and evaluation and management by a physical therapist who has expertise in cervical pathology including ON.
An identifiable trigger/pain point along the course of the GON, LON, and/or TON using pain sketches, demarcation by the patient with an index finger, tenderness/Tinel sign, +/- a positive Doppler, suggesting the site of anatomic compression evaluated by the surgeon.
A positive response to a NB performed by the surgeon at the presumed site of compression. A positive NB response was defined as at least a 50% relative reduction in ON headache intensity lasting at least 24 h.
Following occipital nerve decompression surgery, the treatment outcome was evaluated at 12 months postoperatively in terms of headache frequency (headache days per month), intensity (scale of 0–10), and duration (h), as well as changes in medications, NB and BTX injections. Patients were also questioned about the percent-resolution of ON headaches following surgery based on ON headache frequency, intensity, and duration. Although ON typically involves lancinating pain lasting for seconds to minutes, during a flare some patients have volleys of pain, and others have a baseline steady pain in the same nerve distribution with superimposed lancinating pain; duration was therefore measured in hours to account for these two common presentations of ON. Patients were also questioned about the effect of nerve decompression surgery on reducing coexisting headaches. Data were collected prospectively using REDCap questionnaires preoperatively and at 12 months postoperatively. Data quantifying the number of preoperative treatments (oral medications, NB, and BTX) were retrospectively collected from chart reviews.
Statistical analyses were conducted using SAS ® software (SAS Institute Inc., Cary, NC). Descriptive statistics of continuous variables were reported using means and standard deviations or median and interquartile range depending on normality. Frequencies and percentages were used for categorical variables. Paired t -tests were performed to compare preoperative and postoperative ON headache frequency, duration, and intensity. A p < 0.05 was considered statistically significant.
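As a sketch of the paired comparison described above, the analysis can be reproduced with SciPy's `ttest_rel`. The values below are hypothetical illustrative numbers, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical headache days/month for 15 patients, pre vs. post surgery
# (illustrative values only, not the study data).
pre = np.array([30, 30, 28, 30, 25, 30, 20, 30, 30, 26, 30, 22, 30, 30, 30])
post = np.array([5, 0, 10, 16, 4, 2, 8, 0, 12, 6, 1, 15, 3, 0, 9])

# Paired t-test on pre/post measurements from the same patients.
t_stat, p_value = stats.ttest_rel(pre, post)
significant = p_value < 0.05
```

A paired test is appropriate here because preoperative and postoperative measurements come from the same patients, so the within-patient differences, not the raw values, carry the signal.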
Data availability
Anonymized data will be shared by request from any qualified investigator. | Results
All 15 patients were included in the final analysis. Patients were predominantly female (80%). The average age of ON onset was 20 (±8) years. The time between the onset of ON headache and surgical decompression averaged 19.9 (±13.6) years. Prior to nerve decompression surgery, median headache days per month was 30 (20–30), duration was 24 h (12–24), and intensity was 8 (8–10). ON was bilateral in 13 patients (86.7%) and unilateral in 2 patients (13.3%). All patients were diagnosed with at least one additional headache disorder. Most patients (9 patients, 60%) had coexisting chronic migraine in the form of holocephalic throbbing pain, difficulty concentrating, photophobia, phonophobia, and nausea during attacks. Other headache diagnoses included persistent posttraumatic headache in three patients (20%), new daily persistent headache (NDPH) with migrainous features in two patients (13.3%), episodic migraine in one patient (6.7%), and trigeminal neuralgia in one patient (6.7%). Twelve patients (80%) reported a history of head or neck injury prior to ON onset. Patient demographics and ON headache characteristics are presented in Tables 2 , 3 , respectively.
Preoperatively, all patients had trialed at least three different pharmacologic classes of preventive medications, although three patients (20%) were on no preventive medications prior to surgery due to lack of efficacy and/or side effects. An average of 11.7 (±9) BTX injection cycles were administered per patient. At the time of surgical screening, BTX had been discontinued in one (6.7%) patient and had reduced effectiveness over time in 6 patients (40%). An average of 10.2 (±5.8) NB injection cycles were performed per patient. Among the study patients, the maximum duration of relief after an NB was 3 months, but that duration of benefit was not sustained with subsequent blocks, which prompted surgical evaluation. One (6.7%) patient had undergone previous radiofrequency ablation (RFA), which provided only temporary relief. Twelve (80%) patients reported having undergone physical therapy. Other treatment modalities included acupuncture, chiropractic therapy, and meditation with limited improvement. The average duration of medical management by the referring headache specialist before referral to the plastic surgeon for nerve decompression surgery was 3.9 (±2.8) years.
Patients underwent bilateral (13 patients, 86.7%) or unilateral (2 patients, 13.3%) greater occipital nerve (GON) decompression. A thickened trapezius fascia overlying the GON was observed intraoperatively in 12 (80%) patients ( Figure 1 ). Any contact between the GON and the occipital artery was observed in 7 (46.7%) patients, and the contact was extensive in most cases (57%). Simultaneous LON decompression was performed in 9 (60%) patients and TON decompression in 3 (13.3%) patients. The decision to include LON and TON decompression was based on history, physical examination, NB response, and/or intraoperative findings. The mean postoperative follow-up period was 16.8 (±9.7) months.
At 12 months postoperatively, median lancinating ON headache frequency was 5 (0–16) days/month ( p < 0.01), intensity was 4 (0–6) ( p < 0.01), and duration was 10 (0–24) h ( p < 0.01). In terms of lancinating ON pain, the median patient-reported percent resolution was 80% (70%−85%). Resolution of 100% was reported in 4 (26.7%) patients, ≥80% in 5 (33.3%), ≥50% in 4 (26.7%), and ≤20% in 2 (13.3%). All patients reported a reduction or discontinuation of at least one class of medications. In 11 (73.3%) patients, medications were reported to be more effective after surgery as compared to preoperatively. All patients reported increased effectiveness and/or reduced frequency of needed occipital NB or BTX injections after surgery. One patient completely stopped all medications, BTX, and NB.
All patients reported that surgery had helped relieve their concomitant headache disorders, which was in most cases migraine. However, although improved after surgery, patients still continued to experience migraine symptoms such as fronto-orbital pain, difficulty concentrating, photophobia, phonophobia, and nausea during headache attacks. Postoperative outcomes are summarized in Table 4 .
Among the two patients who reported < 50% resolution, the first was diagnosed with ON, persistent posttraumatic headache with migrainous features, and trigeminal neuralgia following multiple direct strikes to the head and concussions. She was known to have intractable head pain necessitating repeated ED visits. She responded to NB with some relief (< 50%) for < 48 h at the time of surgical evaluation. After bilateral GON decompression, the patient initially reported 95% relief at 3 months postoperatively, but her ON headache returned predominantly on the right side, reporting an overall 0% resolution of her ON headaches at 12 months. The patient did, however, report discontinuing antiemetics and increased efficacy of NB. Reoperation was performed with GON transection and nerve end reconstruction on the side of the predominant ON headache and repeat GON decompression on the other side. At 12 months after reoperation, the patient reported a 95% resolution of her occipital pain.
The second patient who reported < 50% resolution was diagnosed with ON, chronic migraine, and cervical radiculopathy status postmicrodiscectomy. The onset of ON was with no known initiating factor or history of head or neck injury. The patient reported intractable pain with multiple ED visits. At the time of surgical evaluation, NB provided significant relief (>50%) but for < 48 h. After 32 years since ON onset, the patient underwent unilateral GON, LON, and TON decompression. At 12 months postoperatively, she reported only a 20% resolution of ON headache with a reduction in the intensity of her ON headaches but not in frequency or duration. Postoperatively, she reported discontinuation of pregabalin, reductions in opioid and antiemetics, and better efficacy of medications. She did not endorse any changes in the use of BTX and NB. This patient did not undergo reoperation; however, given her diffuse body pain, she was seen in a peripheral nerve clinic and was diagnosed with small fiber neuropathy. During 2023 follow-up appointments, this patient's ON and migraine have been under relatively good control with 10 or fewer severe headache days per month as compared to her small fiber neuropathy pain, which has been daily and disabling. | Discussion
This case series (1) described the clinical features of patients with refractory ON amenable to surgical treatment, (2) demonstrated that all patients benefited from occipital nerve decompression surgery to varying degrees, and (3) suggests that the collaborative selection criteria employed by neurologist and surgeon may be replicable in clinical practice.
Clinical features of patients with refractory ON amenable to surgical treatment
The clinical diagnosis of ON is challenging, and its true prevalence is unknown. Although previous studies have reported on the clinical features of ON, few have described the features of chronic refractory cases potentially amenable to surgical decompression ( 4 , 6 , 31 – 36 ). In this case series, patients with chronic refractory ON exhibited all symptoms described by the ICDH-3 diagnostic criteria but also reported additional symptoms that have not been significantly reported in the literature.
First, we observed a higher prevalence of bilateral ON headache compared to unilateral ON headache. While the ICDH-3 acknowledges that ON can manifest as both unilateral and bilateral, previous studies have reported a predominance of unilateral ON, which contrasts with the results from our series ( 33 , 34 , 36 , 37 ). This increased incidence of bilateral ON headache may be attributed to the chronicity of ON in our series or could signify the underlying pathophysiology involving diffuse nuchal myofascial hypertrophy following head or neck injury ( 22 ).
Second, patients frequently presented with pain radiating from the occipital to the fronto-orbital regions, which can be clearly visualized with the use of pain drawings. Such distribution of pain has been documented by others and is thought to be due to either extracranial anastomoses of the occipital nerves with trigeminal nerves or due to referred pain mechanisms at the level of the trigeminocervical complex ( 34 , 36 ). Since frontal pain may also manifest in other headache disorders, its presence may contribute to instances of ON misdiagnosis or underdiagnosis.
Third, in addition to experiencing short paroxysms of lancinating pain, shooting, stabbing, or sharp in quality and lasting from a few seconds to minutes, patients in our series presented with constant achy pain in the same distribution. In clinical practice, this is commonly seen with ON as well as in other paroxysmal pain disorders such as sciatica and trigeminal neuralgia. This is why the ICHD-3 diagnostic criteria for trigeminal neuralgia include subtypes of the purely lancinating form and the form that has a steady baseline pain in the same distribution as the superimposed lancinating pain ( 3 ). Aching, pressure, pounding, or throbbing sensations, which were at times extra-occipital, may have been a manifestation of coexisting chronic migraine or another headache disorder. The protracted ON headache experienced by these patients could be due to the chronic nature of the condition after an average symptom duration of 19.9 years and/or central sensitization ( 38 ). Alternatively, it could be indicative of a constant compression point by the surrounding tissue. In contrast, episodic ON would be less likely to be associated with anatomical compression ( 22 ).
Furthermore, this study highlighted the high prevalence of coexisting headache disorders among patients with ON. The coexistence of ON with other primary headache diagnoses such as migraine may be an underrecognized phenomenon and has been reported to be seen in up to 25% of patients presenting to a headache specialty clinic ( 4 ). In our case series, all patients had coexisting headache disorders. However, distinguishing multiple headache disorders may be challenging due to a significant overlap of clinical features between ON, migraine, cervicogenic headache, cluster headache, and tension headache ( 3 ). Nevertheless, distinguishing ON from other headache diagnoses is important because the treatment is vastly different. Therefore, patients presenting with headache disorders should be screened for ON.
All selected patients benefitted from nerve decompression surgery
We found that all patients in our series benefited from nerve decompression surgery to varying degrees. Resolution of lancinating ON headache was evidenced by reductions in frequency, intensity, and duration of headaches, as well as significant reductions and/or increased effectiveness of medications, BTX, and NB at 12 months.
Nerve decompression surgery has been previously shown to be an effective treatment option for refractory ON ( 24 – 29 , 39 – 43 ). Studies that have analyzed outcomes following nerve decompression surgery for headaches have mostly reported reductions in headache intensity of −4 points and reductions in headache frequency of −7 to −20 headache days/month, consistent with our findings ( 25 , 26 , 29 , 39 , 44 ). Less has been reported on changes in the duration of headaches, but existing reports have found significant reductions similar to our study ( 45 ). Moreover, the success rate (>50% resolution in ON headache) following occipital nerve decompression surgery has been reported to be approximately 80% (range 68–95%), in line with our results ( 26 , 42 – 44 , 46 ). Furthermore, the decrease in postoperative daily medication use after nerve decompression surgery corroborates the findings of several other studies ( 43 , 44 , 47 ). The improvement in postoperative efficacy of medications, BTX, and NB suggests that ON may act as a barrier to response to these therapies, and nerve decompression surgery likely indirectly improves coexisting headache disorders.
It could be posited that, given the substantial disease burden and the treatment-resistant nature of all patients in this series prior to surgery, this particular cohort of ON patients may exhibit a higher degree of surgical failure as compared to patients in other studies. However, our findings did not align with this expectation. Not only did patients experience resolution of ON (both lancinating and baseline occipital pain) but patients also reported reductions in comorbid headache disorders, most frequently migraine.
In our series, two patients reported < 50% resolution of ON headache at 12 months postoperatively. However, as illustrated in the first patient, revision surgery may be needed at times to achieve >50% resolution. Both cases prompt us to better understand risk factors associated with poor outcomes or reoperation.
Risk factors have been previously described in the literature and include poor NB response, atypical pain drawings, RFA, and cervical spine disorders. While most patients in our series had days to weeks of ON headache relief with NB at the time of screening, both patients had the lowest duration lasting < 48 h. It has been reported that a NB response of < 24 h is associated with worse outcomes following nerve decompression surgery ( 48 ). An absence of NB response would conflict with the diagnostic criteria of ON set forth by the ICDH-3 and can be a relative contraindication to surgery. Atypical pain sketches have also been shown to predict poor surgical success, while a history of RFA and cervical spine disorders have been shown to be associated with a higher number of revision surgeries and nerve transections to achieve acceptable outcomes ( 49 – 51 ). Suboptimal outcomes in these patients may also be due to incomplete decompression during primary surgery and/or subsequent scar tissue formation. A better understanding of risk factors for poor outcomes will aid in refining patient selection criteria.
Collaborative patient selection criteria for nerve decompression surgery
The favorable outcomes observed following occipital nerve decompression surgery in our cohort suggest that the collaborative selection criteria employed in this study could be replicable in clinical practice. These criteria are not arranged in temporal order; they need not be addressed in any particular sequence and may overlap. For example, during the initial visit, the patient may receive a NB, a prescription for gabapentin that can be initiated by the patient depending on NB response, as well as a referral for physical therapy.
The selection criteria for surgical candidacy, which involve assessments by both a headache specialist and a surgeon, highlight the importance of a multidisciplinary approach for optimal treatment of patients with refractory ON. While the headache specialist/neurologist plays a critical role in the diagnosis of refractory ON and in providing the best preoperative management before referral, the surgeon evaluates potential compression sites that can be surgically addressed to alleviate nerve compression symptoms. This relationship is bidirectional, as the surgeon is to refer patients, especially those who are self-referred or referred by their primary care physician, to a headache specialist/neurologist to confirm the diagnosis of ON and ensure that the patient has failed the best medical/interventional management before proceeding with surgery. These multidisciplinary selection criteria may guide the development of future treatment algorithms for patients with refractory ON.
Limitations
This study was limited by the lack of a control group, challenging the establishment of a cause-and-effect relationship between the surgical intervention and the observed outcomes. Although there are several studies in the literature highlighting the significant morbidity associated with ON as well as numerous studies reporting on the effectiveness of nerve decompression surgery, the risk of a placebo effect is particularly high in headache patients and should be acknowledged. | Conclusion
In conclusion, ON can cause disabling headaches and may be highly underdiagnosed.
The frequent coexistence of ON with other headache disorders likely contributes to its underdiagnosis. Therefore, screening for ON should be considered among other headache patients to improve ON diagnosis and treatment.
Furthermore, we demonstrated that for patients with ON that is refractory to conservative therapies, occipital nerve decompression surgery can be an effective treatment, improving lancinating ON pain as well as the effectiveness of medications, NB and BTX. Additionally, surgical treatment of ON was able to improve coexisting headache disorders such as migraines.
Therefore, the collaborative selection criteria employed in this study by a neurologist/headache specialist and surgeon may be replicable in clinical practice. Future efforts should be made to include and define occipital nerve decompression surgery within current and future ON treatment algorithms.
Reviewed by: Raffaele Ornello, University of L'Aquila, Italy; Umberto Pensato, Humanitas Research Hospital, Italy
Background
The management of refractory occipital neuralgia (ON) can be challenging. Selection criteria for occipital nerve decompression surgery are not well defined in terms of clinical features and best preoperative medical management.
Methods
In total, 15 patients diagnosed with ON by a board-certified, fellowship-trained headache specialist and referred to a plastic surgeon for nerve decompression surgery were prospectively enrolled. All subjects received trials of occipital nerve blocks (NB), at least three preventive medications, and onabotulinum toxin (BTX) before referral to the plastic surgeon for surgery. Treatment outcomes included headache frequency (headache days/month), intensity (0–10), duration (h), and response to medication/injectable therapies at 12 months postoperatively.
Results
Preoperatively, median headache days/month was 30 (20–30), intensity 8 (8–10), and duration 24 h (12–24). Patients trialed 10 (±5.8) NB and 11.7 (±9) BTX cycles. Postoperatively, headache frequency was 5 (0–16) days/month ( p < 0.01), intensity was 4 (0–6) ( p < 0.01), and duration was 10 (0–24) h ( p < 0.01). Median patient-reported percent resolution of ON headaches was 80% (70–85%). All patients reported improvement of comorbid headache disorders, most commonly migraine, and a reduction, discontinuation, or increased effectiveness of medications, NB and BTX.
Conclusion
All patients who underwent treatment for refractory ON by a headache specialist and plastic surgeon benefited from nerve decompression surgery to varying degrees. The collaborative selection criteria employed in this study may be replicable in clinical practice.
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving humans were approved by Massachusetts General Hospital, Boston. The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Author contributions
WA: Conceptualization, Methodology, Project administration, Resources, Validation, Writing—review & editing. KR: Data curation, Formal analysis, Writing—original draft. KP: Data curation, Writing—original draft. MH: Data curation, Writing—original draft. LG: Conceptualization, Methodology, Project administration, Supervision, Validation, Writing—review & editing. PM: Conceptualization, Methodology, Project administration, Supervision, Validation, Writing—review & editing. | Conflict of interest
PM was employed by Harvard Vanguard Medical Associates. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Front Neurol. 2023 Nov 28; 14:1284101 (CC BY)
PMCID: PMC10720392; PMID: 38095857

INTRODUCTION
Statistical mechanics describes the behavior of large numbers of physically identical systems [ 1 ]. Molecular dynamics (MD) is the computational application of statistical mechanics to molecular systems such as proteins, nucleic acids and lipid membranes [ 2–5 ]. The fundamental postulate of statistical mechanics is that every energetically accessible microstate of the physical system is equally probable; a microstate is a partition of the total energy of the physical system to each coordinate of its Hamiltonian [ 1 ]. When many identical copies of the physical system are present such as in molecular systems at equilibrium, experimental observations reflect the overall probability distribution of microstates.
The goal of MD is to computationally sample enough microstates of a system of molecules to approximate the distribution of microstates in a biological system at equilibrium, in which there may be on the order of Avogadro's number (≈6 × 10^23) molecules. For MD, statistical equilibrium is defined as the NPT ensemble [ 6 ], in which the number of particles, pressure and temperature are fixed, and the underlying microstate probability distribution is the Boltzmann distribution for the enthalpy.
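The Boltzmann weighting referred to here can be made concrete with a short numerical sketch (plain NumPy; the energies are hypothetical placeholders): each microstate with energy E receives probability proportional to exp(−E/kT).

```python
import numpy as np

def boltzmann_probabilities(energies_kj_mol, temperature_k=300.0):
    """Return normalized Boltzmann probabilities for a set of microstate energies."""
    kB = 0.0083144626  # Boltzmann (gas) constant in kJ/(mol*K)
    e = np.asarray(energies_kj_mol, dtype=float)
    # Subtract the minimum energy before exponentiating for numerical stability;
    # this does not change the normalized probabilities.
    w = np.exp(-(e - e.min()) / (kB * temperature_k))
    return w / w.sum()

# Two states separated by 10 kJ/mol at 300 K: the lower-energy state dominates.
p = boltzmann_probabilities([0.0, 10.0])
```

The exponential dependence on energy is what makes high-energy barrier regions so improbable, and hence so rarely visited by a finite-length simulation.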
Traditional MD attempts to sample microstates by integrating Newton’s second law according to empirically determined molecular force fields [ 2 , 3 ]. The underlying major assumption is that the MD trajectory is ‘ergodic’ [ 6 ], that is, given enough time steps, the trajectory will visit all microstates with a frequency given by the Boltzmann distribution. However, there is no guarantee that a given MD trajectory will be ergodic. Transitioning between states that are separated by large energy barriers presents a significant challenge for MD simulations [ 7 ]. Numerous approaches have been proposed to address this shortcoming of MD, such as Monte Carlo methods [ 8–11 ], metadynamics [ 12 ] and umbrella sampling [ 13 ]. Recently, Boltzmann generators (BGs) have emerged as a promising candidate for replacing MD [ 14 , 15 ].
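To make the Newton-integration step at the heart of traditional MD concrete, here is a minimal velocity-Verlet sketch for a single 1D particle in a harmonic potential, a stand-in for a real empirical force field (this is an illustration, not the authors' implementation):

```python
import numpy as np

def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Minimal velocity-Verlet integrator for one particle in 1D.

    `force` is a callable returning the force at position x (F = -dE/dx).
    """
    f = force(x)
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2   # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt        # velocity update with averaged force
        f = f_new
        traj.append(x)
    return np.array(traj)

# Harmonic oscillator F = -k x; the amplitude (and energy) should stay bounded.
k, m = 1.0, 1.0
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, mass=m, dt=0.01, n_steps=1000)
```

Velocity Verlet is the standard choice in MD because it is time-reversible and symplectic, which keeps the energy drift bounded over long trajectories; ergodicity, however, is a property of the sampled dynamics and is not guaranteed by the integrator.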
Foundational methods were proposed by Noé et al . [ 14 ] to use generative neural networks for the sampling of microstates. The central idea is that instead of predicting a single trajectory as in MD, one may instead train a neural network to predict Boltzmann-distributed states. This approach seeks to train a neural network to learn multiple energy minima simultaneously. While Noé et al . successfully demonstrated their method for simple physical systems and small proteins, there were critical theoretical and practical deficiencies limiting the application of their methods that we address in this work.
The theoretical deficiencies in the original BG framework we address are various biases in angle generation due to (i) the use of a Gaussian ansatz for molecular degrees of freedom and (ii) the regularization of a discontinuous output. The practical deficiencies we address are (iii) tight coupling between energy and entropy estimation, necessitating millions of evaluations of an external molecular force field, (iv) potential numerical instabilities due to reliance on eigendecomposition and (v) inefficiencies in the generation of rotamers. We will describe how we address these five deficiencies in the results, with detailed and technical discussions relegated to the Methods.
In this work, we demonstrate that decoupling the energy and entropy training losses and propagating forces directly from the molecular force field reduces the needed evaluations of the force field by a factor of a thousand; we achieve sampling comparable to traditional MD with only ~10^3 evaluations of the force field for chicken villin headpiece (PDB ID 1VII), a 35-residue protein domain. We demonstrate a simple method of gradient propagation for an arbitrary external force field, and we implement the AMBER 14 force field [ 2 ] in pure PyTorch [ 16 ], as is done in the TorchMD framework [ 17 ]. We include the Generalized Born implicit solvent [ 18–22 ], which is not present in TorchMD [ 17 ]. We suggest strategies to avoid numerical instabilities and intrinsic biases in the neural network, and we propose a code-efficient method of rotamer sampling that accommodates arbitrary molecules while remaining end-to-end differentiable for neural network training. We also present a highly parallel and memory-efficient version of the rotamer sampling algorithm. The result of these improvements is a numerically robust and fast architecture for BGs.
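One way to propagate gradients from an arbitrary external force field, sketched here with a PyTorch `autograd.Function` (this is an illustration of the general idea, not the authors' implementation), is to have the forward pass call the external code for the energy and forces, and have the backward pass return the negative forces as the energy gradient. The quadratic energy below is a placeholder for a real force field such as AMBER:

```python
import torch

class ExternalEnergy(torch.autograd.Function):
    """Wrap an external force-field call as a differentiable node.

    The external code is assumed to return both the energy and the forces;
    the backward pass supplies -forces as dE/dx, since F = -dE/dx.
    """

    @staticmethod
    def forward(ctx, coords):
        # Placeholder for the external force-field call (here: harmonic wells).
        energy = (coords ** 2).sum()
        forces = -2.0 * coords  # force fields conventionally return F = -dE/dx
        ctx.save_for_backward(forces)
        return energy

    @staticmethod
    def backward(ctx, grad_output):
        (forces,) = ctx.saved_tensors
        return grad_output * (-forces)  # chain rule: grad_output * dE/dx

coords = torch.randn(5, 3, requires_grad=True)
energy = ExternalEnergy.apply(coords)
energy.backward()  # coords.grad now holds dE/dcoords = -forces
```

Because the gradient comes from the force field's own forces rather than from autograd tracing through the energy code, the external evaluator can be any black box (including non-PyTorch code) that reports forces.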
Theoretical considerations for Gaussian versus non-Gaussian inputs to Boltzmann generators
One major theoretical limitation of traditional MD that carries over to BGs is difficulty in sampling disconnected local energy minima (i.e. metastable states). Fundamentally, the neural networks used in BGs are differentiable models that generate molecular internal coordinates from latent variables and are therefore continuous functions from the latent space to the space of internal coordinates. The BGs originally proposed by Noé et al . [ 14 ] generate internal coordinates by sampling a single multidimensional Gaussian distribution
centered at the origin with unit standard deviation in each coordinate. In Noé et al ., the latent and output dimensions were both set to three times the number of atoms in the protein (i.e. the 3D coordinates). This distribution is spherically symmetric; however, in high dimensions, the volume of the unit n-ball tends to 0 even for modest values of n, implying that the density of the multidimensional Gaussian distribution is highly concentrated near the origin. Meanwhile, the probability density of the molecule’s Boltzmann distribution is highly concentrated in disjoint regions of configuration space, since the energy minima of a molecule are separated by high energy barriers (low Boltzmann probability). Since the latent density concentrates at the origin, we are asking the neural network to approximate a one-to-many relation, which is not a function, let alone a continuous one. Instead, it may be beneficial to sample from a sum of Gaussians
and to require that sampling from distinct latent regions produces sets of internal coordinates that are also disjoint, for example through a repulsive loss on pairs of generated configurations. This method would be analogous to metadynamics sampling [ 12 ], in which previously generated molecular states are avoided, here formulated in the distributional sense. This method is also analogous to k -means clustering, in which each centroid is responsible for representing a single cluster in the data. Alternatively, one could use an ensemble of neural networks, with each neural network responsible for generating Boltzmann-distributed states for a single energy minimum and its neighborhood of conformations.
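As an illustration of this alternative, the following sketch draws latent vectors from an equally weighted sum of Gaussians. The centers are hypothetical placeholders; the text leaves their placement (and the repulsive loss tying each center to a distinct energy minimum) open.

```python
import numpy as np

def sample_mixture(centers, n_samples, sigma=1.0, rng=None):
    """Draw latent vectors from an equally weighted sum of Gaussians.

    centers: (k, d) array of mixture centers (a hypothetical choice here;
    in a BG each center would be tied to one energy minimum).
    Returns the samples and the index of the component each came from.
    """
    rng = np.random.default_rng() if rng is None else rng
    k, d = centers.shape
    which = rng.integers(0, k, size=n_samples)            # pick a component per sample
    noise = rng.normal(scale=sigma, size=(n_samples, d))  # unit-variance Gaussian noise
    return centers[which] + noise, which
```

Sampling from such a mixture keeps each latent mode localized, so the generator need only map each mode to one basin rather than approximating a one-to-many relation.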
Differentiable force fields
Using OpenMM 7.7 [ 23 ] as our reference, we rewrote force field terms from OpenMM in terms of pure PyTorch operations, allowing for automatic differentiation of the molecular energy without the memory overhead of repeatedly transferring positional data between PyTorch [ 16 ] and OpenMM. Our implementation creates a custom PyTorch function for a provided molecule, which stores the computational graph necessary to reproduce its energy.
If such an implementation is not available for a custom energy function, one may refer to the Supplemental Methods ( Ad Hoc Propagation of Forces, Physical Interpretation of U new ) and Algorithm S1 : FFDiff.
A comprehensive and technical accounting of our methods is found in the Supplemental Methods (Rewriting Molecular Force Fields to be End-to-End Differentiable, Avoiding Singularities in the Energy Function During Backpropagation). We verified that energies and gradients from our PyTorch implementation of the AMBER force field matched those of OpenMM to within 5% (Data and Materials Availability).
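To make the idea concrete, here is a NumPy sketch of a single AMBER-style term, the harmonic bond energy, together with the analytic gradient that autograd supplies automatically in a PyTorch implementation. The function names are illustrative, not from our released code.

```python
import numpy as np

def harmonic_bond_energy(pos, bonds, k, r0):
    """AMBER-style harmonic bond term: U = sum_b k_b (|r_i - r_j| - r0_b)^2.

    pos: (n, 3) positions; bonds: (m, 2) atom index pairs;
    k, r0: (m,) force constants and equilibrium lengths.
    Written with torch ops instead of numpy, autograd would
    differentiate this expression directly.
    """
    d = pos[bonds[:, 0]] - pos[bonds[:, 1]]
    r = np.linalg.norm(d, axis=1)
    return np.sum(k * (r - r0) ** 2)

def harmonic_bond_grad(pos, bonds, k, r0):
    """Analytic gradient of the term above (what autograd would compute)."""
    d = pos[bonds[:, 0]] - pos[bonds[:, 1]]
    r = np.linalg.norm(d, axis=1)
    coeff = (2 * k * (r - r0) / r)[:, None]   # dU/dr times dr/dd
    g = np.zeros_like(pos)
    np.add.at(g, bonds[:, 0],  coeff * d)     # accumulate per-atom forces
    np.add.at(g, bonds[:, 1], -coeff * d)
    return g
```

Checking the analytic gradient against a finite difference of the energy is the same consistency test we applied, term by term, against OpenMM.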
Preparation of macromolecule graph metadata for rotameric sampling
Given an arbitrary macromolecule whose atoms are covalently bonded into a single connected structure, we sought to modify only the dihedral angles.
First, we created an undirected graph [ 24 ] of the macromolecule, with atoms as the nodes and covalent bonds as the edges.
Second, many macromolecules contain cycles, each of which reduces the number of degrees of freedom by one, so we performed a depth-first search (which generates a spanning tree), breaking each cycle at a single bond and retaining all other bonds as dihedral degrees of freedom.
Third, we removed all leaf nodes and edges terminating on leaves, since the bonds corresponding to such edges do not represent a dihedral angle. For each remaining edge, we assigned an output of the neural network to control the dihedral angle for that bond.
Finally, for each dihedral edge, we used depth-first search to calculate the two connected components that result from removing that edge from the tree. We recorded the smaller of the two connected components as the set of atoms to rotate about the dihedral axis represented by that edge.
This method is summarized in Algorithm S2 : PrepRot.
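The steps above can be sketched in pure Python as follows. This is a simplified analogue of Algorithm S2: the function and variable names are ours, and bookkeeping of the broken cycle bonds is omitted.

```python
from collections import defaultdict

def prep_rotatable_bonds(bonds):
    """Identify rotatable (dihedral) bonds of a connected molecular graph.

    Builds a DFS spanning tree (breaking each cycle at one bond), drops
    tree edges touching a leaf, and for each remaining edge records the
    smaller of the two components obtained by deleting it.
    """
    adj = defaultdict(set)
    for a, b in bonds:
        adj[a].add(b); adj[b].add(a)
    # DFS spanning tree
    root = bonds[0][0]
    tree = defaultdict(set)
    seen, stack = {root}, [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree[u].add(v); tree[v].add(u)
                stack.append(v)
    # tree edges not touching a leaf represent dihedral axes
    edges = {tuple(sorted((u, v))) for u in tree for v in tree[u]}
    dihedral_edges = [e for e in edges
                      if len(tree[e[0]]) > 1 and len(tree[e[1]]) > 1]
    # smaller connected component when the edge is removed from the tree
    rotating_sets = {}
    for u, v in dihedral_edges:
        comp, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in tree[x]:
                if y not in comp and {x, y} != {u, v}:
                    comp.add(y); stack.append(y)
        other = seen - comp
        rotating_sets[(u, v)] = comp if len(comp) <= len(other) else other
    return dihedral_edges, rotating_sets
```

For a simple five-atom chain, only the two interior bonds survive as dihedral axes, each with the shorter end of the chain as its rotating set.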
Differentiable rotamer sampling
We started with the matrix of positions of all the atoms in the macromolecule. For each of the angles predicted by the neural network, we selected the corresponding edge, whose endpoints are the two atoms forming the covalent bond. We calculated the axis of rotation as the difference of the positions of those two atoms. We did not prefer a specific orientation for each rotation axis, since we predicted the full range (−π, π] for each angle. We also computed the centroid of the bond as the origin for rotation. We normalized the axis to a unit vector and used Rodrigues’ rotation formula [ 25 ] for rotation about an axis by an angle, thereby computing a rotation matrix. We then took the previously calculated connected component, translated all the particles in that component so that the bond centroid lies at the origin, applied the rotation matrix and translated the selected particles back. After performing this transformation for all the dihedrals we wished to sample, we returned the final position matrix as the output.
Our method of differentiable rotamer sampling is summarized in Algorithm S3 : DiffRot.
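The per-bond transformation can be sketched as follows (NumPy for brevity; the released version expresses the same steps as differentiable torch ops so gradients flow through the rotation).

```python
import numpy as np

def rotate_about_bond(pos, i, j, theta, moving_atoms):
    """Rotate `moving_atoms` by angle theta about the bond axis i--j.

    Mirrors the steps in the text: axis from the bond vector, origin at
    the bond centroid, Rodrigues' rotation matrix, then
    translate-rotate-translate of the smaller connected component.
    """
    axis = pos[j] - pos[i]
    axis = axis / np.linalg.norm(axis)
    center = 0.5 * (pos[i] + pos[j])
    # cross-product (skew-symmetric) matrix of the unit axis
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    out = pos.copy()
    idx = list(moving_atoms)
    out[idx] = (pos[idx] - center) @ R.T + center
    return out
```

Applying this once per predicted dihedral, in any fixed order, yields the final conformation.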
Alignment of point clouds
There were two methods we could have used to align point clouds. The first was the well-known Kabsch algorithm [ 26 ], and the second was through an alternative approach using unit quaternions [ 27 , 28 ]. We chose to employ the second method since it reduced to a largest magnitude eigenvalue/eigenvector problem. The mathematical details for this method are found in the Supplemental Methods (The Kabsch Algorithm and A Quaternionic Alternative).
The quaternion approach is summarized in Algorithm S4 : QuaternionAlignment.
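A compact sketch of the quaternion alignment (Horn's closed-form solution) follows. For clarity this version uses a full symmetric eigensolver, whereas the method only requires the eigenvector of the largest eigenvalue of the 4×4 matrix.

```python
import numpy as np

def quaternion_align(P, Q):
    """Best-fit rotation matrix taking point set P onto Q (both (n, 3)).

    Horn's quaternion method: the optimal rotation is the eigenvector of
    a 4x4 symmetric matrix N with the largest eigenvalue, so only a
    largest-eigenpair solve is needed in practice.
    """
    S = P.T @ Q                                   # cross-covariance
    Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    vals, vecs = np.linalg.eigh(N)
    w, x, y, z = vecs[:, np.argmax(vals)]         # unit quaternion (sign-irrelevant)
    return np.array([
        [w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)],
        [2*(y*x + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
        [2*(z*x - w*y),         2*(z*y + w*x),         w*w - x*x - y*y + z*z]])
```

Because only the top eigenpair enters, the full eigensolver above can be swapped for an iterative, backpropagation-friendly solve.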
Parallelization of differentiable rotamer sampling
To take advantage of the parallel computing potential of GPU and TPU backends, we followed an approach like that of AlQuraishi’s parallelized natural extension reference frame (pNeRF) algorithm [ 29 , 30 ]; however, our method generalizes to all rotational degrees of freedom. The algorithm is described in detail in the Supplemental Methods (Technical Details of Parallelized Differentiable Rotamer Sampling). We also discuss the computational advantage of this method in the Supplemental Methods (Memory Advantage of Parallelizing the Differentiable Rotamer Sampling Method).
Our parallelized version of differentiable rotamer sampling is summarized in Algorithm S6 : DiffRotParallel. For practical purposes, lines 1–21 in Algorithm S6 only need to be executed once, and the rotamer sampling from lines 22 to the end may be placed in a separate function.
Bias-free, continuous representation of dihedral angles
We used simple feedforward neural networks, with a continuous representation of angles for the output [ 31 ]. To avoid biasing predicted angles, we did not predict angles directly. For each dihedral angle θ, we used our neural network to predict two parameters, (x, y), and calculated θ = atan2(y, x). The atan2 function and its derivative are well defined in all four quadrants and produce an angle in the range (−π, π]. To regularize our neural networks and prevent (x, y) from drifting to the origin or infinity, we added a modulus loss to our training cost, with weight λ:
This training cost has the advantage of being rotationally symmetric, so that there is no preferred angle. For comparison, the regularization loss
will bias every angle toward the center of the allowed interval, since the network is penalized for exploring the space near the interval boundaries [ 14 ]. In simple terms, the penalization of out-of-range outputs bleeds through to the valid angles, especially near the limits. We demonstrate this bias through a simple Markov chain [ 32 ] model of training in the Supplemental Methods (A Simplified Demonstration of Biased Sampling due to a Discontinuous Mapping).
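A minimal sketch of this representation and a rotationally symmetric modulus regularizer follows. The exact functional form of the loss in our training code is not reproduced here; the (|v|² − 1)² form below is one natural choice that penalizes only the modulus, never the angle.

```python
import numpy as np

def angles_from_pairs(xy):
    """Map raw network outputs (x_i, y_i) to angles via atan2.

    Continuous, defined in all four quadrants, range (-pi, pi]."""
    return np.arctan2(xy[:, 1], xy[:, 0])

def modulus_loss(xy, weight=1.0):
    """Keep each (x, y) pair near the unit circle.

    Depends only on |v|^2, so it is rotationally symmetric: no angle is
    preferred, unlike a penalty on the angle value itself."""
    r2 = np.sum(xy ** 2, axis=1)
    return weight * np.sum((r2 - 1.0) ** 2)
```

In the PyTorch version the same two functions are written with torch ops so that gradients flow from the angle back to (x, y).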
Estimation of entropy
For estimation of distribution entropy of our Boltzmann generators, we used a method tailored for multivariate circular distributions [ 33 ], which attempts to mitigate correlations among the angles. The metric for two sets of angular samples was defined as the arclength on the unit circle for each angle
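This per-angle arclength metric and the resulting first-nearest-neighbor estimate can be sketched as follows. We assume here that the per-angle arclengths are combined as a Euclidean norm and, as in the text, drop the bias-correcting constants, so only relative values and gradients are meaningful.

```python
import numpy as np

def circular_nn_entropy(angles):
    """Nearest-neighbor entropy estimate for a batch of angle vectors.

    angles: (n_samples, n_angles).  The metric between two samples uses
    the arclength on the unit circle for each angle; the estimate is the
    Kozachenko-Leonenko first-nearest-neighbor form up to additive
    constants (dropped, as in the text).
    """
    n, d = angles.shape
    diff = np.abs(angles[:, None, :] - angles[None, :, :])
    diff = np.minimum(diff, 2 * np.pi - diff)    # wrap to arclength
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)               # exclude self-distance
    eps = dist.min(axis=1)                       # 1st-nearest-neighbor distance
    return d * np.mean(np.log(eps))              # up to additive constants
```

A tightly clustered batch yields a lower value than a batch spread over the full angular range, which is the signal the entropy term of the training loss needs.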
Given a batch of samples, we then computed the nearest neighbor for each sample in the batch according to this metric, and estimated the entropy of each sample as (Eq. 17 in the original manuscript [ 33 ], with first nearest neighbors corresponding to k = 1)
and averaged over the entire set of samples. Since we were only using the entropy to provide approximate gradients to the Boltzmann generator, we ignored all constants that corrected for bias to the numerical value of the entropy in the original formula.
RESULTS
We begin with an overview of the changes we made to the BG framework to address the five issues enumerated in the introduction. We first address the theoretical concerns (i) and (ii), then we discuss aspects of our implementations addressing practical concerns (iii)–(v), and finally we discuss the results and basic benchmarking of our proposed methods.
(i) A Gaussian ansatz for the latent space is biased toward the origin
BGs are continuous functions between a latent space of random variables and a target space representing physical configurations; we will denote elements of the latent space by z, the BG by G and the generated physical configurations by x = G(z).
The Kullback–Leibler (KL) divergence [ 34 ] measures differences between the generated distribution and the true Boltzmann distribution. BGs are therefore trained to minimize the KL divergence, to reproduce the physical configurations expected from statistical mechanics [ 14 ]. The original BG framework constrains the functional form of G in a variety of ways to ensure that the estimate of the KL divergence is directly computable, with fixed formulas for all the terms involved, including the Jacobian determinant of G. The first constraint on the functional form of G is that its input is a vector of Gaussian random variables.
In molecular systems, there are often multiple metastable/low-energy states, represented by high-probability islands of the target space separated by a sea of low-probability states. We therefore face a fundamental difficulty if we seek functions that map a Gaussian vector to such a target space. In high dimensions, Gaussian vectors are highly concentrated in a single location: the origin. This can be seen through the volume of a unit ball [ 35 ], which is exponentially suppressed with the dimension. Since Gaussian vectors are spherically symmetric, the proportion of configurations they represent, compared to the volume of the entire target space, also falls exponentially. This implies that the Jacobian determinant of G must be exponentially large in the dimension so that the generated physical configurations are not stuck near a single state, which in practice leads to numerical instability. For large molecular systems like proteins, even a coarse-grained model rapidly leads to a large dimension, since we must predict many degrees of freedom, and thus to severe suppression. Additional discussion may be found in the Methods (Theoretical Considerations for Gaussian Versus Non-Gaussian Inputs to Boltzmann Generators).
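A quick Monte Carlo check of this concentration argument: the probability that a standard Gaussian sample lands inside any fixed unit ball collapses rapidly with dimension, illustrating the exponential suppression discussed above.

```python
import numpy as np

def gaussian_mass_in_unit_ball(d, n=100_000, seed=0):
    """Monte Carlo estimate of the probability mass a standard
    d-dimensional Gaussian places inside the unit ball."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, d))
    return np.mean(np.linalg.norm(x, axis=1) < 1.0)
```

In one dimension about 68% of the mass lies within unit distance of the origin; by ten dimensions the fraction has dropped below one part in a thousand.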
(ii) Naïve angle sampling by transforming a single variable is discontinuous
In the original BG framework, angles were generated using a single random variable, and the BG was trained to produce values in a fixed interval for each angle, for example for a backbone dihedral angle in a protein. The physical reality is that angles are periodic: they fundamentally lie on circles. Since the BG is a continuous function, we therefore require that it map the latent space to the target space continuously; the transformation introduced in the original BG framework is discontinuous because it seeks to map a line segment to the circle [ 31 ].
Even ignoring this fundamental difficulty, the original BG framework trained the network by penalizing it for producing angles outside of the allowed interval. This method is biased, even for angles within that range. Penalizing out-of-range outputs makes it difficult for the network to produce angles close to the interval boundaries, intuitively because producing angles too close to the limits leads to a greater likelihood that the network will be penalized in the next training step. In other words, the network is in danger of producing invalid outputs if we allow it to stray too close to the boundary. We demonstrate this bias against producing angles close to the boundary in the Supplemental Methods (A Simplified Demonstration of Biased Sampling due to a Discontinuous Mapping). Using a simple, discrete Markov chain model of training, we explicitly show that when the network is fully trained (i.e. the training process is at equilibrium), the probability of the network producing a given angle is reduced near the boundaries relative to the center of the interval.
Resolving (i) and (ii) using bias-free, continuous representation of angles
We show in the Methods (Bias-Free, Continuous Representation of Dihedral Angles) how to resolve both (i) and (ii), using a mathematical prescription introduced specifically for 2D angle generation [ 31 ]. In the original BG framework, each angle corresponds to a single output of the network. In the bias-free continuous method, each angle is instead represented by two outputs, x and y, where we have suggestively labeled the generated numbers to imply x = cos θ and y = sin θ. We then calculate θ = atan2(y, x).
If we assume that x and y are drawn from distributions centered at 0 and are similarly initially unbiased, then θ is also drawn bias-free from its full range, and training of x and y proceeds in an unbiased fashion, with no difficulties near the interval boundaries. Furthermore, the gradient of atan2 is well defined and bounded:
and therefore backpropagation through atan2 does not introduce an exploding gradient problem.
Finally, the rotameric degrees of freedom of a molecule encompass a product of circles (a torus). Since x and y are assumed to be rotationally symmetric initially, the resulting angles from Eq. (52) represent the uniform distribution on the circle. Therefore, we only require the Jacobian determinants of the transformations involved to be of order one, and we avoid the exponential scaling problems found in the original BG framework.
The general architecture of our method is shown in Figure 1 .
(iii) Decoupling entropy estimation and energy function evaluation improves training speed by orders of magnitude
In the original BG framework, energy and entropy estimation are performed in a single calculation, necessitating as many calls to the energy function as calls to the neural network to generate physical configurations. The calls to the energy function are especially burdensome when the energy function is not implemented in the same framework as the neural network, causing a memory transfer bottleneck in (1) transferring generated coordinates from the neural network to the external energy function and (2) transferring energies and gradients from the external energy function to the neural network. Even when the energy function is written in the same framework as the neural network, the evaluation of a molecular force field is computationally much more expensive than coordinate generation by the neural network. We found that decoupling the energy portion of the loss function from the entropy estimation portion of the loss function resulted in many orders of magnitude faster training, with far fewer evaluations of the energy function necessary.
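One minimal way to realize this decoupling is to compute the cheap entropy term from generated samples at every training step while evaluating the expensive force-field energy only every few steps, caching the last value in between. The sketch below uses placeholder callables (not our actual training code) and counts the resulting energy-function calls.

```python
import numpy as np

def train_decoupled(gen_step, energy_fn, entropy_fn, n_steps, energy_stride):
    """Toy illustration of decoupled energy/entropy losses.

    gen_step(step) -> batch of generated samples (placeholder generator).
    energy_fn is the expensive force-field call; entropy_fn is cheap and
    is computed at every step.  Hypothetical scheduling, for illustration.
    """
    n_energy_calls = 0
    cached_u = None
    losses = []
    for step in range(n_steps):
        samples = gen_step(step)
        if step % energy_stride == 0 or cached_u is None:
            cached_u = energy_fn(samples)      # expensive, infrequent
            n_energy_calls += 1
        losses.append(cached_u - entropy_fn(samples))  # cheap, every step
    return losses, n_energy_calls
```

With a stride of 10, the number of force-field evaluations drops by an order of magnitude relative to one call per step, which is the kind of saving that compounds into the thousand-fold reduction reported above.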
(iv–v) Eigendecomposition-free fragment assembly for efficient rotamer sampling
The original BG framework used singular value decomposition to remove the translational and rotational degrees of freedom encountered during sampling. Because we sample the rotameric degrees of freedom directly, we do not need to perform such eigendecomposition, which can be numerically unstable when backpropagating gradients, as the documentation for PyTorch’s torch.linalg.svd indicates [ 16 ].
In this work, we implement direct sampling of rotameric degrees of freedom in macromolecules ( Figure 2 ), described in the Methods (Differentiable Rotamer Sampling) and associated Supplemental Methods . High-energy degrees of freedom such as bond lengths and bond angles are kept fixed, or ‘frozen out’ in the language of statistical mechanics, since they are typically inaccessible to the molecular system in its native biological environment. We demonstrate that our methods reflect traditional MD ( Figures 3 – 5 ), which we will discuss more carefully in the sequel.
Our method of direct sampling of rotamers can be parallelized (Methods, Parallelization of Differentiable Rotamer Sampling) by breaking up the molecule into fragments ( Figure 2A ), performing rotamer sampling for each fragment independently and in parallel ( Figure 2B ), and reassembling the fragments through 3D alignment ( Figure 2C ). This parallelization results in large improvements in performance, particularly with GPU acceleration ( Figure 6A–D ).
For assembly of the fragments, instead of relying on eigendecomposition to perform 3D alignment through the well-known Kabsch algorithm [ 26 ], we use an alternative method that relies only on finding the largest eigenvalue and associated eigenvector of a certain matrix [ 28 ], rather than computing a complete eigendecomposition (Methods, Alignment of Point Clouds). We also develop a method (Methods, Differentiable Largest Eigenvalue and Associated Eigenvector of a Square Matrix) to find the largest eigenvalue and associated eigenvector quickly and stably, which is better suited to automatic differentiation than singular value decomposition ( Figure 2D ).
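The largest-eigenpair solve can be made backpropagation-friendly with a shifted power iteration, since every step is a plain matrix product and normalization. The following is a simplified sketch; our released version chooses the shift and iteration count adaptively.

```python
import numpy as np

def top_eigenpair(A, shift=None, iters=200):
    """Largest (algebraic) eigenvalue and eigenvector of a symmetric matrix.

    Shifted power iteration: A + shift*I is made positive definite so that
    the algebraically largest eigenvalue also has the largest magnitude.
    Note the fixed starting vector fails if it is exactly orthogonal to
    the top eigenvector; a random restart would guard against that.
    """
    n = A.shape[0]
    if shift is None:
        shift = np.abs(A).sum()        # crude bound ensuring positive definiteness
    B = A + shift * np.eye(n)
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        v = B @ v
        v = v / np.linalg.norm(v)      # every op here is autograd-friendly
    return v @ A @ v, v                # Rayleigh quotient and eigenvector
```

Unlike a full eigendecomposition, nothing in this loop has the degenerate-eigenvalue gradient pathologies flagged in the PyTorch documentation for `torch.linalg.svd`.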
Finally, we implement the molecular force field completely in PyTorch ( Figure 6E and F ), avoiding the memory transfer bottleneck between the energy function and neural network components of the architecture mentioned previously.
Basic benchmarking of differentiable rotamer sampling with molecular force fields
In the following sections, we will describe our initial evaluations of our proposed adjustments to the BG framework. See the Supplemental Methods for additional details on ‘Learning rate tuning’, ‘Neural network architectures’, ‘Traditional MD’ and ‘Order parameters’ such as RMSD and RMSF.
Initializing Boltzmann generators at the native state
We tested the effect of pre-training the neural networks to reproduce the input structure, or in other words to produce the identity function on the structure. For these experiments ( Figure 3A and B ), we trained the neural network for a varying number of pre-training epochs with the loss
where the remaining terms include fixed contributions such as the angle modulus loss and weight decay regularization.
We observed that initial pre-training was crucial to allow neural networks to converge. With no pre-training epochs, we were unable to train neural networks within 5000 epochs, and all structures produced appeared highly unphysical, with numerically infinite energies (not shown). However, any number of pre-training epochs above 100 appeared suitable for initialization of the neural networks ( Figure 3A ).
Sampling without entropy
When we trained neural networks solely on the energy and not the entropy ( Figure 3A and B )
we observed that while the intrinsic noise in the Adam optimizer allowed for sampling of states that were not global energy minima ( Figure 3A ), the resulting neural networks did not reproduce the results of traditional MD at non-zero temperature ( Figure 3B ).
Temperature and entropy, and effect of training length on structure generation
We observed that by estimating the multivariate circular distribution entropy and training to maximize the entropy and minimize the energy, we were able to reproduce traditional MD protein backbone root-mean-square fluctuations ( Figure 3C–H ), as well as prevent mode collapse ( Figure 4A–C ) and sample Boltzmann-distributed states ( Figure 4D–I ). For this set of experiments, we used the training loss
where T is the temperature of the system ( Supplemental Methods , Estimation of Temperature of Boltzmann-Generated Samples). We chose this form of the loss to maintain the relative contributions and dynamic range of the energy and entropy terms, while preventing exploding gradient contributions from dividing by a small temperature or multiplying by a large one. As previously discussed, we estimated entropy independently from energy function evaluations; we used a nearest neighbor procedure tailored for measuring the entropy of multivariate circular distributions (Methods, Estimation of Entropy) [ 33 ]. In the units of our implementation,
with the ideal gas constant and the physical temperature measured in kelvin; a human body temperature of 310 K then fixes the corresponding numerical value used in training. However, since our estimate of the entropy is only correct asymptotically, the actual numerical values for the temperature may differ in practice. Interestingly, it appears that reproducing the results from traditional MD is a matter of fine-tuning both the length of neural network training and the temperature ( Figures 3C and 4D–I ).
Training networks with non-Gaussian priors and comparing stochastic gradient descent versus the Adam optimizer
In the preceding sections, we used Gaussian noise as the initial input of our models. However, as we discussed in issue (i), multivariate Gaussians [ 35 ] may significantly limit the conformations we are able to sample (Methods, Theoretical Considerations for Gaussian Versus Non-Gaussian Inputs to Boltzmann Generators). We therefore examined differences in RMSF accuracy as a function of the input noise; we also tested for differences between the Adam optimizer and stochastic gradient descent ( Figure 5 ). We found no empirical difference between Gaussian and non-Gaussian input noise, and we found that stochastic gradient descent was inferior to Adam for reproducing RMSFs calculated by traditional MD.
Benchmarking
We performed basic benchmarking of the rotamer sampler, the parallelized version of the rotamer sampler and the force field on the CPU of an M1 Max MacBook Pro with 64 GB RAM, and on an NVIDIA Tesla T4 GPU with 16 GB RAM ( Figure 6 ). These benchmarks were performed on the combined forward and backward passes through the computational graph, to imitate real-world usage. We observed an advantage of up to 10× on the GPU in terms of total throughput of rotamer sampling ( Figure 6A and B ). We also compared the original non-parallel dihedral sampler ( Algorithm S3 ) to the parallel dihedral sampler running on the NVIDIA GPU ( Figure 6C and D ), which showed a performance advantage of 4× on the protein we used for these experiments. Finally, the energy function also ran faster on the NVIDIA GPU than on the CPU, with 10× greater throughput ( Figure 6E and F ).
DISCUSSION
In this work, we addressed five categories of fundamental deficiencies in the BG framework. In deficiencies (i) and (ii), we addressed sources of neural network bias by using continuous sampling of rotameric states. Since we have addressed these theoretical issues in the Results and Methods, we will focus on the practical advantages of our proposed methods in deficiencies (iii)–(v).
Computational throughput improvements in the BG framework
By decoupling energy minimization from entropy maximization, we were able to perform Boltzmann sampling with three orders of magnitude fewer calls to the energy function than in the original BGs. While the original BG framework required millions of evaluations of the energy function, we only required thousands to tens of thousands to reproduce traditional MD results in the form of residue-wise RMSF ( Figures 3 – 5 ). Because we included the implicit generalized Born solvent, we were able to avoid the use of explicit water while simulating proteins in biologically relevant conditions.
In the original BG framework, all atomic coordinates are predicted [ 14 ]. In contrast, we postulated that many of the degrees of freedom are unimportant to protein dynamics, and so we selectively sampled internal degrees of freedom (i.e. the dihedral angles), explicitly freezing out all other modes in the molecule. A side effect of our approach was that we did not need to manually remove modes such as overall molecular rotation/translation through eigendecomposition. Though we no longer have a closed form expression for the entropy, we gain an enormous computational advantage in the neural network.
Model size reduction in the BG framework
A major contributing factor to our computational advantage was that our networks handle the same protein systems with far less memory and fewer trainable parameters than the original BG framework. The BG framework required that the entire neural network be invertible so that exact gradients for the KL divergence between Gaussian priors and Gaussian posteriors could be backpropagated. Thus, each layer of the original BG neural networks has the same dimension; otherwise the Jacobian determinant of the network would vanish and the neural network would no longer be invertible. Even for small proteins like the chicken villin headpiece we studied, this dimension is three times the number of atoms, and thus the neural networks quickly grow in number of trainable parameters. If we restrict the number of hidden units in any layer to a value less than this dimension, then the Jacobian determinant of the neural network transformation will immediately become zero, since the transformation is no longer full rank. Therefore, the number of parameters in an invertible network is at least the square of this dimension (3 196 944 for chicken villin headpiece, not including bias parameters). In practice, we require multiple hidden layers with just as many parameters to gain sufficient approximation power for the neural network, resulting in networks with tens of millions of trainable parameters, even for modestly sized proteins. Since we selectively sampled degrees of freedom that are normally not frozen out at biologically relevant energies, we were able to reduce the computational burden and memory footprint of the neural networks used. To be concrete, we consider the example given by our results. For backbone dihedral sampling, we required only a prediction of the angles for each of the 36 residues in chicken villin headpiece, leading to 10-layer, fully connected models that had 196 692 trainable parameters. The equivalent model in the original BG framework would require roughly 100× as many trainable parameters.
Memory and hardware acceleration of the BG framework
By writing both the rotamer sampler and the force field in pure PyTorch, we were able to leverage the automatic computational graph optimization, GPU acceleration and automatic differentiation features of PyTorch. By using pure PyTorch, we avoided the burden of copying data back and forth from an external energy function to PyTorch. We provide benchmarks for our algorithms, which demonstrate high throughput ( Figure 6 ). In terms of performance, it is not surprising that the GPU outperformed the CPU significantly, given a large enough batch size ( Figure 6 ). We also observed the expected plateau in performance increase due to the saturation of the CUDA driver with simultaneously executing kernels. We explain the theoretical advantage of the parallelized rotamer sampler over naïve rotamer sampling in the Methods (Parallelization of Differentiable Rotamer Sampling with improved memory usage). Our energy function was not completely optimized since many of the pairwise computations were computed for both the upper and lower triangles of the distance matrix, resulting in a 2-fold redundancy in certain calculations that are symmetric in the particle order. We encountered difficulty with the limited memory on the GPU (the NVIDIA Tesla T4 only has 16 GB of video RAM), particularly with the energy function, whereas the CPU had access to 64 GB of RAM at the cost of compute speed. We did not perform benchmarks on the M1 Max GPU because the PyTorch backend for Apple’s Metal Performance Shaders is not complete.
Analogy of BGs with traditional MD
In the case of classical molecular force fields, the high energy barriers tend to be a result of physical singularities in the Lennard–Jones and Coulomb potentials at low distance due to internuclear forces, with Lennard–Jones repulsion (scaling as r⁻¹²) dominating any electrostatic force (scaling as r⁻¹) at low distance. The enormous Lennard–Jones repulsion at low distances reflects the Coulomb barrier, from positively charged nuclei coming into contact; we must therefore avoid generating states in which nuclei significantly overlap. In traditional MD and in Monte Carlo methods, umbrella sampling is used to regularize the singularities; we implicitly implemented umbrella sampling in this work by regularizing the Lennard–Jones forces at low distances. In Noé et al . [ 14 ], a logarithmic regularization is performed on the total energy. In addition, the force field is further regularized by the neural network training process, such as through gradient clipping [ 36 ], dropout [ 37 ] and weight penalties [ 16 ]. Such methods of regularization are necessary for neural network convergence (also see Supplemental Methods , Avoiding Singularities in the Energy Function During Backpropagation); therefore, BGs as originally presented are umbrella sampled.
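As an illustration of regularizing the Lennard–Jones singularity, one can continue the potential linearly below a switching distance, keeping the energy finite and the force bounded as r → 0. This is one of several possible schemes; the exact scheme we used follows the Supplemental Methods.

```python
import numpy as np

def lj_regularized(r, sigma, epsilon, r_switch):
    """Lennard-Jones 12-6 energy, linearly continued below r_switch.

    The continuation matches the value and slope of the potential at
    r_switch, so the energy is C1-continuous and the force is bounded.
    """
    r = np.asarray(r, dtype=float)

    def u(x):                      # plain 12-6 potential
        s6 = (sigma / x) ** 6
        return 4.0 * epsilon * (s6 ** 2 - s6)

    def du(x):                     # dU/dr of the plain potential
        s6 = (sigma / x) ** 6
        return 4.0 * epsilon * (-12.0 * s6 ** 2 + 6.0 * s6) / x

    inside = r < r_switch
    # np.maximum keeps the "outside" branch away from the r -> 0 singularity
    return np.where(inside,
                    u(r_switch) + du(r_switch) * (r - r_switch),
                    u(np.maximum(r, r_switch)))
```

Above the switching distance the regularized potential is identical to the plain 12-6 form, so equilibrium properties away from clashes are unaffected.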
Despite their shortcomings, BGs maintain at least one significant advantage over traditional MD with umbrella sampling. BGs implicitly retain some memory of the entire training trajectory, and therefore they may reuse knowledge of the structure between different energy states. For example, an ideal BG may learn the correlations among the internal coordinates, correlations which may hold between distinct energy minima.
Limitations and challenges
Significant challenges remain in the practical use of BGs. Even though we were able to reproduce RMSFs with high correlation ( Figure 3C ), the resulting structures still resemble the native state. We found that the energy landscape of the protein was a highly sensitive function of both the temperature and learning rate; fine-tuning of both appears to be required to produce useful results. However, the methods we present in this work make the fine-tuning process more easily accessible to researchers. Future work may also examine the effect of neural network architecture and other hyperparameters on angle generation since we did not study that effect here. Finally, it may be possible to further accelerate certain computations with a field-programmable gate array, which can be configured on a per-protein basis to perform rotamer sampling and energy computation.
In this work, we only performed limited benchmarking of a single protein, focusing on carefully analyzing the theoretical and practical improvements in BG methods. Extensive testing on other macromolecular systems and benchmarking with different force fields is necessary to draw solid conclusions about BGs as applied to MD. However, at the current stage, our primary focus was to establish the theoretical foundation for our approach and demonstrate its potential. The upcoming phases of our research will encompass the extensive testing required to solidify our method’s practical utility. In addition, our improvements did not address the issue of bias in choosing initial trainable parameters for a BG; i.e. we still needed to train the model to mimic a chosen input state before allowing it to explore the remainder of the configuration space (Results, Initializing Boltzmann Generators at the Native State). It is unknown at present if BG models can be made trainable and convergent without this initial bias.
Fortunately, it is simple to swap out the AMBER 14 force field we used in this model with other force field parameters, through the openmmforcefields package. In addition, our code is written so that arbitrary proteins (and other macromolecules) can be inserted into the energy function in the form of a PDB file and an appropriate choice of force field, allowing future benchmarking to be performed.
CONCLUSION
In conclusion, we present a comprehensive toolkit of differentiable methods for molecular science. Our contributions include ad hoc propagation of forces from an arbitrary force field for cases in which rewriting the force field is infeasible, differentiable and parallel rotamer sampling/protein fragment assembly, a guide to writing molecular force fields in a differentiable programming framework, decoupling of energy and entropic estimation, and mathematical results on 3D point cloud alignment and 3D rotation representation that can be applied to problems in molecular geometry. We additionally address potential sources of bias in molecular structure generation and outline the approach to remaining sources of bias, which we did not implement. We demonstrate that our methods are efficiently implementable on CPU and GPU, and mathematically sound. We hope that other researchers will find these methods and the accompanying reference code useful in investigating molecular energy landscapes.

Abstract
Molecular dynamics (MD) is the primary computational method by which modern structural biology explores macromolecule structure and function. Boltzmann generators have been proposed as an alternative to MD, by replacing the integration of molecular systems over time with the training of generative neural networks. This neural network approach to MD enables convergence to thermodynamic equilibrium faster than traditional MD; however, critical gaps in the theory and computational feasibility of Boltzmann generators significantly reduce their usability. Here, we develop a mathematical foundation to overcome these barriers; we demonstrate that the Boltzmann generator approach is sufficiently rapid to replace traditional MD for complex macromolecules, such as proteins in specific applications, and we provide a comprehensive toolkit for the exploration of molecular energy landscapes with neural networks.

FUNDING
National Institutes of Health (1R35 GM134864, 1RF1 AG071675, 1R01 AT012053); the National Science Foundation (2210963); the Passan Foundation.
AUTHOR CONTRIBUTIONS
C.M.S: conceptualization, data curation, formal analysis, investigation, methodology, software, validation, visualization, writing—original draft, writing—review and editing. J.W.: investigation, methodology, software, validation, writing—review and editing. N.V.D.: formal analysis, funding acquisition, investigation, methodology, project administration, resources, supervision, validation, visualization, writing—review and editing.
DATA AND MATERIALS AVAILABILITY
All code (Jupyter notebooks, Python scripts) to reproduce the results in this work is included in a BitBucket repository ( https://bitbucket.org/dokhlab/diffrot-manuscript/ ). Non-essential pretrained model weights are available on request to the corresponding author ( [email protected] ). We also include example movies generated by our rotameric Boltzmann generators ( Movies S1 and S2 ).
Author Biographies
Congzhou M Sha is an MD-PhD student at the Penn State College of Medicine, specializing in machine learning for molecular biophysics, structural biology, and drug docking.
Jian Wang is an Assistant Professor of Pharmacology at Penn State College of Medicine, applying machine learning and advanced algorithms for protein allostery, molecular dynamics, structural biology, drug docking, and other areas of computational biology.
Nikolay V Dokholyan is the G. Thomas Passananti Professor of Pharmacology at Penn State College of Medicine, with broad expertise in protein allostery and engineering, molecular dynamics, drug discovery, neurodegenerative disease, and machine learning.

License: CC BY. Citation: Brief Bioinform. 2023 Dec 12; 25(1):bbad456.
PMC10721706 (PMID: 38097904)

Introduction
Radiotherapy is an important treatment modality offered to approximately 50% of patients with cancer in either the curative or palliative setting [ 1 ]. Radiotherapy-induced nausea and vomiting (RINV) are common and often undertreated symptoms among patients receiving radiotherapy, and the risk varies with the different sites of irradiation and the delivered radiation dose per fraction [ 2 ]. Hence, it is important that clinicians know how to prevent or ameliorate nausea and vomiting in different radiotherapy settings, ensuring that patients complete the treatment successfully without critical dose delays and maintaining optimal quality of life.
This is an update of the Multinational Association of Supportive Care in Cancer (MASCC) and European Society for Medical Oncology (ESMO) antiemetic guideline for radiotherapy update 2015 [ 3 ], part of the 2015 MASCC and ESMO guideline update for the prevention of chemotherapy- and radiotherapy-induced nausea and vomiting and of nausea and vomiting in cancer patients [ 4 ]. The purpose of the update is to review the literature of clinical trials in radiotherapy or concomitant chemoradiotherapy from 2015 to present and based on the literature to provide an update of the evidence-based guideline for the use of antiemetic prophylaxis and treatment in radiotherapy or concomitant chemoradiotherapy. | Literature review and methods
A medical librarian searched Ovid Medline, the Cochrane Central Register of Controlled Trials, Embase Classic, and Embase for references pertaining to RINV and C-RINV without restrictions on the type of study. An initial search was conducted on 8 July 2022 for references published from 1 May 2015 to 8 July 2022, and an updated identical search was performed on 21 July 2023 for references published from 9 July 2022 to 31 Jan 2023. The total review period thus extended from 1 May 2015 to 31 Jan 2023. Two members of the committee (KD and CR) screened all titles and abstracts of the references from the search to identify those requiring full article review. References were excluded if the studies were not focused on nausea and vomiting experienced by patients receiving radiotherapy or concomitant chemoradiotherapy, if they covered pediatric patients or if they were written in a language other than English.
All authors assessed the included literature for full text review. Three web meetings and several email correspondences with discussions and conclusions preceded the final proposal for the RINV guideline update, which was presented and finally approved by the MASCC/ESMO Antiemetic Guidelines Consensus Committee. | Results
The combined searches yielded 343 references (321 from the first search and 22 from the second); 114 duplicates were removed (113 + 1), leaving 229 total references for screening (208 + 21). Of the 229, 169 were excluded during screening, leaving 60 references for which the full articles were reviewed. Of the 60, 23 were published as abstracts only, leaving 37 articles retained for full article review. Of the 37, 16 records were excluded after full article review for various reasons (e.g., not RINV relevant, lower methodology, and reviews), leaving 20 publications finally included for potential incorporation into update recommendations (Fig. 1 ). Three publications were classified as RINV clinical trials or prospective studies [ 5 – 7 ]; one study was a meta-analysis [ 8 ]; 12 publications addressed concomitant chemoradiotherapy including two phase III antiemetic clinical trials [ 9 – 20 ]; and five studies concerned risk factors, practice patterns, methodology in RINV clinical trials, and other topics [ 4 , 21 – 23 ]. The studies are reviewed and discussed below.
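The screening flow above is plain arithmetic; a quick sketch that reproduces the reported counts through the full-article-review stage (numbers taken directly from the text):

```python
# Record flow from the two literature searches, as reported above.
identified = 321 + 22        # first search + updated search
duplicates = 113 + 1         # duplicates removed
screened = identified - duplicates
full_text = screened - 169   # 169 excluded at title/abstract screening
reviewed = full_text - 23    # 23 abstract-only records set aside

print(identified, screened, full_text, reviewed)  # 343 229 60 37
```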
Risk classification
Risk factors
Risk factors for RINV are less investigated compared to those for chemotherapy-induced nausea and vomiting (CINV). Two observational studies by the Italian Group for Antiemetic Research in Radiotherapy (IGAAR) identified that irradiated site (upper abdomen), field size > 400 cm 2 , and concomitant chemotherapy are independent risk factors for development of RINV [ 2 , 24 ].
Since the 2015 update, none of the published data have provided results for patient- or treatment-related risk factors to modify the risk classification guideline. In a small retrospective study ( n = 62) by Uno et al., risk factors associated with nausea and vomiting in patients with cervical cancer receiving radiotherapy with or without concomitant weekly cisplatin (40 mg/m 2 ) were explored [ 23 ]. Patients treated with cisplatin received granisetron and dexamethasone as antiemetic prophylaxis. In summary, patients aged > 65 years had clinically significantly less nausea and vomiting compared to the younger patients, and this difference was observed regardless of whether patients received concomitant cisplatin or radiotherapy alone. Only 27% of the younger patients in the concomitant cisplatin group achieved complete response (no vomiting and no use of rescue medication). The study suggests that younger patients treated for cervical cancer with radiotherapy alone should be considered for antiemetic prophylaxis, and as demonstrated in the GAND-emesis study [ 10 ], patients with cervical cancer (regardless of age) treated with radiotherapy and concomitant weekly cisplatin should receive antiemetic prophylaxis with a neurokinin (NK) 1 -receptor antagonist (RA), a 5-hydroxytryptamine(5-HT) 3 -RA, and dexamethasone.
Levels of emetic risk with radiotherapy
The emetic risk of radiotherapy is divided into four risk levels; high, moderate, low, and minimal (Table 1 ). The risk levels are categorized according to the site of irradiation with different emetic risk potentials. The emetic risk of the four levels is based on observations from clinical trials, cohort studies, and expert opinions. The emetic risk levels of the various sites of radiation remain the same as for the previous guideline update. The incidence of emesis (proportion of patients with emesis if no antiemetics are provided) of the four risk levels is poorly described overall, although it is well described for total body irradiation and half body irradiation [ 25 , 26 ]. In the 2009 edition of the guideline, and mainly based on expert opinions, percentages for emetic risk were displayed for the four levels. However, acknowledging the high uncertainty of the percentages and the fact that the figures do not influence the guideline recommendations, it was decided for the 2015 update to omit the percentages. Conversely, the American Society of Clinical Oncology (ASCO) antiemetic guideline continues to display the percentages of the four risk categories [ 27 ].
Antiemetic efficacy studies in radiotherapy
Three RINV clinical trials and one meta-analysis in patients receiving single fraction or fractionated radiotherapy are discussed.
Due to the low incidence of RINV for the “low emetic risk” level, no high-quality studies have investigated the use of prophylaxis in this setting. The guideline expert panel estimated that the majority of patients will be subjected to overtreatment if using prophylaxis. Therefore, it was decided to adjust the recommendation for the “low emetic risk” level to recommend rescue antiemetics only.
Efficacy of 5-HT 3 receptor antagonists
As for previous updates of the MASCC/ESMO antiemetic guideline for radiotherapy, no specific 5-HT 3 -RA as antiemetic prophylaxis or rescue treatment is recommended over another.
A meta-analysis published in 2017 assessed 17 randomized controlled trials for efficacy of antiemetic regimens in radiotherapy [ 8 ]. Among patients receiving radiotherapy to the abdomen/pelvis, the study found that prophylaxis with a 5-HT 3 -RAs was significantly more efficacious than placebo and dopamine-RAs in both complete control of vomiting [OR 0.49; 95% confidence interval (CI), 0.33–0.72 and OR 0.17; 95% CI, 0.05–0.58 respectively] and complete control of nausea (OR 0.43; 95% CI, 0.26–0.70 and OR 0.46; 95% CI, 0.24–0.88 respectively). Prophylaxis with 5-HT 3 -RAs was also more efficacious than rescue therapy and dopamine RAs plus dexamethasone. The addition of dexamethasone to 5-HT 3 -RAs compared to 5-HT 3 -RAs alone provides a modest improvement in prophylaxis of RINV. Among patients receiving total body irradiation, 5-HT 3 -RAs were more effective than other agents (placebo, combination of metoclopramide, dexamethasone, and lorazepam). These findings are in accordance with the 2015 guideline update, which remains unchanged in the current update.
Palonosetron as RINV prophylaxis was explored in a pilot study including 75 patients receiving low or moderate emetic risk radiotherapy in a palliative setting (8 Gy single fraction ( n = 44), 20 Gy in 5 fractions, or 30 Gy in 10 fractions) [ 5 ]. Patients received 0.5 mg of palonosetron orally, at least one hour prior to the first fraction of radiotherapy, and every other day until treatment completion. Complete control (no emetic episode, no use of rescue medication, and no more than mild nausea) was the primary efficacy parameter and results were compared with historical data. In the acute phase (day 1 of treatment to day 1 post-treatment), 93.3% and 74.7% reported complete control of vomiting and nausea, respectively. In the delayed phase (days 2–10 post-treatment), 93.2% and 74.0% reported complete control of vomiting and nausea, respectively. These figures were clinically significantly higher compared to a historical cohort using ondansetron. The results need to be confirmed in a larger scale randomized setting to assess the efficacy and tolerability of multiple doses of palonosetron, and the potentially modulating effect of dexamethasone which is often given for the purpose of pain flare prophylaxis among patients undergoing radiotherapy for bone metastases.
Efficacy of NK 1 -receptor antagonists
The NK 1 -RAs as antiemetic prophylaxis in radiotherapy remains largely unexplored.
Two small clinical studies including NK 1 -RAs for the prevention of RINV have been published. One study randomized patients scheduled to receive radiotherapy with at least 30 Gy in total to receive either ondansetron ( n = 20) or ondansetron plus aprepitant ( n = 20) [ 6 ]. However, 80% in the combination group received concomitant chemotherapy, whereas the figure was 60% for the ondansetron-only group. There is no information on the antiemetics provided for CINV. The endpoint was symptoms of RINV (not further specified), and results showed a significantly higher grade of RINV in the ondansetron group compared to the combination group.
The other study was a phase II single arm study including 52 evaluable patients receiving radiotherapy to the upper abdomen [ 7 ]. Patients receiving fractionated radiotherapy (at least 40 Gy in total) with or without radiosensitizing chemotherapy received oral ondansetron 8 mg BID and aprepitant 125/80/80 mg on Monday, Wednesday, and Friday throughout radiotherapy. Complete response (no vomiting, no use of rescue therapy) during the entire observation period of radiotherapy was achieved by 57.7% (30/52; 95% CI, 43.2–71.3%). Nausea was common with 61.5% reporting significant nausea at any time during the observational period. Compared to historical data, aprepitant and ondansetron as dosed in this trial were not superior to standard ondansetron monotherapy.
From a methodological point of view, it is difficult to draw conclusions from these studies regarding efficacy of addition of aprepitant for the prevention of RINV, and the research question about efficacy of NK 1 -RAs for the prevention of RINV remains unanswered.
Effects of integrative and complementary therapies on RINV
Integrative oncology (i.e., the use of mind and body practices, natural products, and/or lifestyle modifications, etc.) is extensively explored for the reduction of CINV, whereas for RINV only a few studies have attempted, but failed, to demonstrate efficacy of, e.g., acupuncture [ 28 ]. Enblom et al. therefore examined the use of integrative oncology techniques and conducted a survey in 200 patients treated with abdominal/pelvic irradiation [ 22 ]. Daily registrations of nausea and practice of complementary self-care strategies were collected. Two thirds of the patients experienced nausea, and 25% practiced self-care for nausea at least once, mostly by modifying eating or drinking habits, for a mean of 15.9 days. Interestingly, patients who practiced integrative self-care experienced less nausea.
Antiemetic efficacy studies in chemoradiotherapy
The findings in an observational study in patients receiving radiotherapy and concomitant low-dose cisplatin, comparing two cohorts using either antiemetic prophylaxis with a 5-HT 3 -RAs and dexamethasone (control) or the same prophylaxis plus aprepitant, demonstrated a trend towards higher control rates for nausea and vomiting in patients receiving the NK 1 -RA [ 9 ].
The GAND-emesis study was a well-designed phase III trial comparing fosaprepitant 150 mg day 1 with placebo, both combined with palonosetron and dexamethasone, for the prevention of chemoradiotherapy-induced nausea and vomiting (C-RINV) in cervical cancer patients ( n = 246) treated with fractionated radiotherapy and concomitant weekly cisplatin 40 mg/m 2 [ 10 ]. The primary endpoint was the “sustained no emesis” rate (SNE; completely free from emesis during five weeks of chemoradiotherapy). The study found a SNE rate of 49% for the placebo group compared with 66% for the fosaprepitant group (subhazard ratio 0.58 [95% CI, 0.39–0.87]; p = 0.008). The study proved the superiority of adding an NK 1 -RA to a 5-HT 3 -RA and dexamethasone in the setting of low-dose cisplatin concomitant to radiotherapy.
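As an illustrative back-of-the-envelope reading of these rates (a derivation from the reported percentages, not an analysis performed in the trial), the absolute risk reduction and number needed to treat follow directly:

```python
# SNE (sustained no emesis) rates reported for the GAND-emesis trial.
sne_placebo = 0.49        # palonosetron + dexamethasone + placebo
sne_fosaprepitant = 0.66  # triple regimen including fosaprepitant

arr = sne_fosaprepitant - sne_placebo  # absolute risk reduction
nnt = 1 / arr                          # number needed to treat
print(f"ARR = {arr:.2f}; NNT = {nnt:.1f}")  # ARR = 0.17; NNT = 5.9
```

That is, roughly one additional patient remains emesis-free throughout chemoradiotherapy for every six patients treated with the triple regimen.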
Olanzapine (10 mg daily days 1–5) compared to fosaprepitant (150 mg day 1), both in combination with palonosetron and dexamethasone, was explored in a placebo-controlled clinical trial in patients treated for locally advanced head and neck cancer or locally advanced esophageal cancer receiving radiotherapy and concomitant cisplatin > 70 mg/m 2 and 5-fluorouracil, 750 mg/m 2 a day for 4 days [ 11 ]. Efficacy was assessed only for the 120 hours following the first cycle of chemotherapy, and the primary endpoint was complete response overall (120 hours), for which there was no difference between groups (76% and 74% for the olanzapine and fosaprepitant groups, respectively). Due to the study design, the study reports on CINV rather than RINV.
A small single arm study in cervical cancer patients treated with fractionated radiotherapy and concomitant weekly cisplatin 40 mg/m 2 analyzed 65 patients receiving weekly antiemetic prophylaxis with oral olanzapine 5 mg days 1 and 2, intravenous palonosetron 0.25 mg day 1, and intravenous dexamethasone 12 mg day 1 [ 12 ]. The complete response rate was 55%; no vomiting and no nausea were achieved by 63% and 46%, respectively. The time frame for the endpoint is unclear, and the use of NK 1 -RA as rescue is not shown. The use of olanzapine as prophylaxis without an NK 1 -RA for C-RINV in the described setting cannot be recommended.
Two small single arm studies evaluated antiemetic prophylaxis in patients with cervical cancer receiving fractionated radiotherapy and concomitant daily low-dose cisplatin 8 mg/m 2 . The first study ( n = 27) evaluated the efficacy of weekly, day 1 administration of intravenous palonosetron 0.75 mg plus oral aprepitant (125 mg day 1, 80 mg days 2–3). Dexamethasone was only used as rescue [ 13 ]. The primary efficacy endpoint, complete response (no emetic episodes and no rescue medication during the complete treatment period), was achieved by 48%. Rescue medication was needed for 52% of the patients. The second study ( n = 26) evaluated the efficacy of weekly, day 1 administration of intravenous palonosetron 0.75 mg and oral dexamethasone (2 mg twice daily) from day 1 to the end of the treatment period [ 14 ]. Complete response, as defined for the previous study, was achieved by 100% of the patients. In conclusion, these studies highlight the need for adherence to applicable existing guidelines to avoid potential under- or overtreatment, but also the need for further investigation of optimal antiemetic regimens for low-dose daily cisplatin 8 mg/m 2 concomitant to radiotherapy.
Two small prospective studies evaluated the safety and efficacy of antiemetics in patients with malignant glioma receiving standard radiotherapy and concomitant temozolomide (TMZ). The first study ( n = 38) evaluated a weekly dose of intravenous palonosetron 0.25 mg for up to 6 weeks [ 15 ]. C-RINV complete response rates (no vomiting and no use of rescue antiemetics) for 6 weeks ranged from 67 to 79%. The second study ( n = 21) evaluated the addition of aprepitant to palonosetron and dexamethasone [ 16 ]. Complete response rate in the overall period was 76%, and comparing to a historical cohort using a 5-HT 3 receptor antagonist and dexamethasone, the addition of aprepitant significantly improved the complete response rate. Results need to be confirmed in larger scale comparative trials.
Patients ( n = 43) scheduled for fractionated radiotherapy and concomitant cisplatin 100 mg/m 2 (33 mg/m 2 days 1–3) every 3 weeks for two cycles were prospectively assessed for efficacy of an antiemetic prophylaxis regimen consisting of oral aprepitant 125 mg day 1, 80 mg days 2–5; intravenous ondansetron 8 mg day 1; and oral dexamethasone 12 mg day 1, 8 mg on days 2–5 [ 17 ]. The antiemetics were provided for each chemotherapy cycle, and 37 patients completed the two planned cycles. The complete response rate for the overall period was 86%. The study assessed CINV rather than RINV.
A prospective cohort study ( n = 33) assessed the risk of C-RINV during neoadjuvant long-course radiation therapy (low emetic potential) and concurrent 5-fluorouracil-based chemotherapy (low emetic potential) for rectal adenocarcinoma [ 18 ]. No antiemetic prophylaxis was used. The co-primary outcome “vomiting during the entire course of radiotherapy” was observed in 18% of the patients, and one third of the patients used rescue antiemetics during the treatment. Nausea occurred in 64% of the patients during the treatment course, and the onset of nausea was at median 7 days as opposed to 20 days for time to first vomiting episode. The study, subject to a low sample size, underlines the rationale for providing rescue antiemetics for the specific treatment indication, as prophylaxis would result in substantial overtreatment. | Discussion
The systematic literature review provided the basis for an evidence-based update of the recommendations for RINV and C-RINV management. However, the evidence for management of, especially, low and minimal emetogenic radiotherapy remains very limited. For the high and moderate emetic risk levels, the evidence for the recommendations is higher (II, B and II, A), and the cornerstone in this setting is still a 5-HT 3 -RA ± dexamethasone. Contributing to the level of evidence, a meta-analysis analyzed 17 randomized studies in RINV [ 8 ]. However, the studies often apply different methodologies (e.g., different primary endpoints, time frames, and antiemetic schedules), and the study heterogeneity complicates the comparison and introduces risk of bias. This inconsistency in study design has been addressed in a systematic review of methodologies, endpoints, and outcome measures in 34 randomized studies of RINV [ 21 ]. Of special note, only 29% of the randomized studies had specified a primary endpoint a priori. It is clear that there is a need for scientifically high-quality research in RINV, and the authors call for recommendations for ideal trial design and reporting.
There is a need for further improvement of the control of RINV in highly and moderately emetic risk settings. There is no preferred 5-HT 3 -RA for prophylaxis or rescue. A small study has explored the use of the 5-HT 3 -RA palonosetron and compared to historical data [ 5 ]. There seems to be improved control of RINV when palonosetron is used compared to ondansetron. However, there is a need for larger prospective trials to assess the efficacy, safety, and impact on quality of life of palonosetron in this setting. This is also the case for NK 1 -RAs which are not part of the guideline for the prevention of RINV. One small single arm study explored aprepitant as prophylaxis in the moderately emetic risk category and found that the response rates were comparable to historical cohorts not using an NK 1 -RA [ 7 ]. Further investigation of the NK 1 -RAs in selected treatment settings is warranted.
Nausea is one of the most distressing symptoms in patients receiving chemoradiotherapy including weekly cisplatin [ 20 ]. In this setting, progress has been made, and a new recommendation incorporated in the current guideline is a triple antiemetic regimen including an NK 1 -RA (fosaprepitant/aprepitant), a 5-HT 3 -RA, and dexamethasone [ 10 ]. Based on this regimen, patients will encounter less nausea (15% with no nausea during the 5 weeks of treatment compared to 8% in the placebo/no NK 1 -RA group), and further investigations to identify the group that might benefit from further anti-nausea agents (e.g., olanzapine) are needed. The guideline update specifically recommends the NK 1 -RA fosaprepitant/aprepitant for weekly administration. The NK 1 -RAs rolapitant and netupitant have considerably longer plasma half-lives (approximately 180 and 88 hours, respectively) compared to fosaprepitant/aprepitant (approximately 9–13 hours), and the safety during weekly administration is unclear. A prospective study investigating the safety of NEPA (netupitant and palonosetron) during weekly administration for 5 weeks in patients receiving fractionated radiotherapy and concomitant weekly cisplatin is ongoing (NCT03668639).
RINV and C-RINV continue to have an impact on patients’ quality of life. A cross-sectional multinational survey among physicians, nurses, and patients showed that the health care professionals overestimated the incidence of C-RINV but underestimated the impact that this had on patients’ daily lives [ 19 ]. Knowledge sharing and guideline dissemination are important in order to provide evidence-based antiemetic treatment to our patients worldwide.
In summary, none of the published data on RINV since 2015 has influenced the current update of the RINV antiemetics recommendations. However, in concomitant chemoradiotherapy, a single study was identified to impact the guidelines update for C-RINV [ 10 ], providing specific recommendations for prophylaxis during weekly cisplatin 40 mg/m 2 concomitant to fractionated radiotherapy. Moreover, the recommendation for the RINV “low emetic risk” category was changed from “prophylaxis or rescue” to “rescue” only, while the drugs of choice remain unchanged. | Purpose
Radiotherapy- and chemoradiotherapy-induced nausea and vomiting (RINV and C-RINV) are common and distressing, and clinicians need guidance to provide up-to-date, optimal antiemetic prophylaxis and treatment. Through a comprehensive review of the literature concerning RINV and C-RINV, this manuscript aims to update the evidence for antiemetic prophylaxis and rescue therapy and provide a new edition of recommendations for the MASCC/ESMO antiemetic guidelines for RINV and C-RINV.
Methods
A systematic review of the literature including data published from May 1, 2015, to January 31, 2023, was performed. All authors assessed the literature.
Results
The searches yielded 343 references; 37 met criteria for full article review, and 20 were ultimately retained. Only one randomized study, in chemoradiation, provided grounds for new recommendations in the antiemetic guideline. Based on expert consensus, it was decided to change the recommendation for the “low emetic risk” category from “prophylaxis or rescue” to “rescue” only, while the drugs of choice remain unchanged.
Conclusion
As for the previous guideline, the serotonin receptor antagonists are still the cornerstone in antiemetic prophylaxis of nausea and vomiting induced by high and moderate emetic risk radiotherapy. The guideline update provides new recommendation for the management of C-RINV for radiotherapy and concomitant weekly cisplatin. To avoid overtreatment, antiemetic prophylaxis is no longer recommended for the “low emetic risk” category.
Recommendation 1; High emetic risk: Patients receiving radiotherapy at a high emetic risk level should receive prophylaxis with a 5-HT 3 -RA plus dexamethasone. Level of Evidence: II; Grade of Recommendation: B (for the addition of dexamethasone: III/C).

Recommendation 2; Moderate emetic risk: Patients receiving radiotherapy at a moderate emetic risk level should receive prophylaxis with a 5-HT 3 -RA and optional short course dexamethasone. Level of Evidence: II; Grade of Recommendation: A (for the addition of dexamethasone: II/C).

Recommendation 3; Low emetic risk: No routine primary prophylaxis is suggested. Patients receiving radiation therapy of the brain should receive rescue therapy with dexamethasone. Patients receiving radiation therapy to head & neck, thorax, or pelvic sites should receive rescue with dexamethasone, a dopamine-RA, or a 5-HT 3 -RA. Level of Evidence: IV; Grade of Recommendation: B.

Recommendation 4; Minimal emetic risk: No routine primary prophylaxis is suggested. Patients receiving radiotherapy at a minimal emetic risk level should receive rescue with dexamethasone, a dopamine-RA, or a 5-HT 3 -RA. Level of Evidence: IV; Grade of Recommendation: B.

Recommendation 5; Radiotherapy/weekly cisplatin: Patients receiving radiotherapy and concomitant weekly cisplatin should receive prophylaxis before cisplatin administration with a three-drug regimen including a 5-HT 3 -RA, dexamethasone, and fosaprepitant/aprepitant for the prevention of acute nausea and vomiting. Level of Evidence: II; Grade of Recommendation: B.

Recommendation 6; Radiotherapy/weekly cisplatin: In patients receiving radiotherapy and concomitant weekly cisplatin treated with a 5-HT 3 -RA, dexamethasone, and fosaprepitant/aprepitant for the prevention of acute nausea and vomiting, dexamethasone on days 2 to 4 is suggested to prevent delayed nausea and vomiting. Level of Evidence: II; Grade of Recommendation: B.
Recommendation 7; Concomitant radio-chemotherapy: Patients receiving concomitant radio-chemotherapy should receive antiemetic prophylaxis according to the chemotherapy-related antiemetic guidelines of the corresponding risk category, unless the risk of nausea and vomiting is higher with radiotherapy than with chemotherapy. Level of Evidence: IV; Grade of Recommendation: B. | Acknowledgements
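For readers implementing these recommendations in decision-support tooling, the risk-level portion (Recommendations 1–4) reduces to a simple lookup. Below is a hypothetical, heavily simplified encoding for illustration only; agent names and evidence grades are abbreviated, and it is not a substitute for the full recommendations above:

```python
# Hypothetical simplified encoding of Recommendations 1-4 (illustration only;
# clinical nuance, e.g. brain vs. other low-risk sites, is deliberately omitted).
RINV_MANAGEMENT = {
    "high":     {"strategy": "prophylaxis", "agents": ["5-HT3-RA", "dexamethasone"], "evidence": "II/B"},
    "moderate": {"strategy": "prophylaxis", "agents": ["5-HT3-RA", "optional dexamethasone"], "evidence": "II/A"},
    "low":      {"strategy": "rescue", "agents": ["dexamethasone", "dopamine-RA", "5-HT3-RA"], "evidence": "IV/B"},
    "minimal":  {"strategy": "rescue", "agents": ["dexamethasone", "dopamine-RA", "5-HT3-RA"], "evidence": "IV/B"},
}

print(RINV_MANAGEMENT["low"]["strategy"])  # rescue
```

The structure makes the 2023 change visible at a glance: the "low" category now maps to rescue-only management rather than prophylaxis.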
The authors thank Risa Shorr, medical librarian, The Ottawa Hospital, Ottawa, Canada, for the extensive literature search support.
Funding
Open access funding provided by University Library of Southern Denmark. Meeting and production costs have been covered by MASCC and ESMO from central funds.
Declarations
Conflict of interest
Christina Ruhlmann reports personal fees (speaker) from Bristol Myers Squibb (BMS), Helsinn Healthcare SA, and Pharmanovia, and funding for a clinical trial from Helsinn Healthcare SA and the Novo Nordic Foundation. Karin Jordan reports personal fees as an invited speaker from Amgen, art tempi, Helsinn, Hexal, med update GmbH, MSD, Mundipharma, onkowissen, Riemser, Roche, Shire (Takeda), and Vifor; personal fees for advisory board membership from Amgen, AstraZeneca, BD Solutions, Hexal, Karyopharm and Voluntis; and personal fees as author for UpToDate. Franziska Jahn reports honorarium (speaker) from Amgen. Ernesto Maranzano has none to declare. Alex Molasiotis reports honoraria and research grant from Helsinn. Kristopher Dennis has none to declare.

License: CC BY. Citation: Support Care Cancer. 2024 Dec 15; 32(1):26.
PMC10722787 (PMID: 38098072)

Introduction
Annually, approximately 10% of patients receiving chronic anticoagulation therapy undergo diagnostic or therapeutic procedures that are associated with bleeding risks and require therapy interruption [ 1 ]. In particular, patients scheduled for major surgery have a high risk of bleeding. With the growing use of direct oral anticoagulants (DOACs), physicians must optimise periprocedural-DOAC management to balance the risk of bleeding with that of thromboembolic events. While clinical guidelines recommend that surgeries with high bleeding risks (e.g., major surgeries) should utilise temporary DOAC interruption, there are many less invasive procedures that carry a relatively low bleeding risk and do not necessitate interruption [ 1 , 2 ]. However, real-world data on the safety and periprocedural management of DOAC therapy in the setting of major surgeries with a high risk of bleeding are limited.
Previous studies have reviewed and assessed the pharmacological properties [ 3 ] and periprocedural management of DOACs, including rivaroxaban, dabigatran, and apixaban [ 4 ]. In the prospective, noninterventional Dresden registry, patients who underwent major procedures were significantly more likely to experience bleeding and major cardiovascular (CV) events as well as CV death when compared with patients who underwent minimal and minor procedures [ 5 ]. Additionally, in the prospective, nonrandomised PAUSE trial, rates of major bleeding were higher in patients undergoing a high-bleeding-risk procedure treated with apixaban (2.96%) or rivaroxaban (2.95%) compared with dabigatran-treated patients (0.88%) [ 6 ]. Notably, a subgroup of dabigatran-treated patients with creatinine clearance (CrCL) < 50 mL/min undergoing a high-bleeding-risk procedure had slightly longer preprocedural DOAC interruption compared with patients treated with apixaban or rivaroxaban undergoing a high-bleeding-risk procedure [ 6 ].
Edoxaban is a DOAC approved for the prevention of stroke and systemic embolic events (SEEs) in patients with nonvalvular atrial fibrillation (AF) and for the prevention and treatment of venous thromboembolism (VTE) [ 7 – 10 ]. Real-world data regarding periprocedural-edoxaban management are limited, especially in patients undergoing major surgeries.
The EMIT-AF/VTE programme (Edoxaban Management in Diagnostic and Therapeutic Procedures) was designed to investigate bleeding and thromboembolic events prospectively in patients with AF or VTE treated with edoxaban and undergoing procedures of varying risk levels [ 11 , 12 ]. Primary analysis of the EMIT-AF/VTE data showed low rates of periprocedural major bleeding, clinically relevant nonmajor bleeding (CRNMB), acute thromboembolic events, and acute coronary syndrome in edoxaban-treated patients who underwent a wide range of diagnostic and therapeutic procedures [ 13 ]. The objective of this subanalysis is to compare the periprocedural management of edoxaban and clinical outcomes in patients who underwent major vs. nonmajor surgeries.
Methods
Study design
The design and overall results of the Global EMIT-AF/VTE programme (NCT02950168, NCT02951039) are published [ 11 , 13 ]. EMIT-AF/VTE is a multicentre, prospective, observational programme conducted in Europe and Asia in accordance with the Declaration of Helsinki and with local Institutional Review Board approvals. Written informed consent was obtained from all participants prior to enrolment. The periprocedural management of edoxaban therapy was at the discretion of the investigator, including any decision regarding treatment interruption and the timing/duration of any interruption.
Patient recruitment
EMIT-AF/VTE programme enrolment commenced in December 2016 and completed in August 2020 for the countries reported here. Patients were recruited from Belgium, Germany, Italy, the Netherlands, Portugal, Spain, the UK, South Korea, and Taiwan. Eligible patients were ≥ 18 years of age, had AF or VTE, were treated with edoxaban according to the local labels, were not enrolled in any interventional study concurrently, and underwent any type of diagnostic or therapeutic procedure [ 11 , 13 ]. Surgeries, and therefore patients, were excluded from the analysis if there were multiple surgeries on the same day, missing surgery dates, and/or the surgery date was more than 14 days after the last edoxaban dose.
Results
Patient demographics and baseline characteristics
Overall, 1830 patients who underwent 2436 procedures were enrolled in the Global EMIT-AF/VTE programme; of the 711 patients included in this subanalysis, 250 (35.2%) underwent major surgeries and 461 (64.8%) underwent nonmajor surgeries. In total, 788 surgeries were analysed, as some patients underwent more than one surgery (Fig. 1 ). A total of 276 major surgeries were performed, with the most common being orthopaedic (27.9%), general (25.7%), or cardiothoracic/vascular (18.5%; Fig. 2 ). Patients who underwent major surgeries were significantly younger at enrolment (mean ± SD, 73.1 ± 8.8 years) than patients who underwent nonmajor surgeries (mean ± SD, 74.8 ± 9.7 years; P = 0.02; Table 1 ). A higher percentage of patients who underwent major surgeries were in the 65 to < 75 age group compared with those who underwent nonmajor surgeries. Patients who underwent major vs. nonmajor surgeries had similar baseline CHA2DS2-VASc (mean ± SD, 3.5 ± 1.5 vs. 3.6 ± 1.5) and HAS-BLED scores (mean ± SD, 2.0 ± 1.0 vs. 1.8 ± 1.1). The percentage of patients with impaired renal function (CrCL ≤ 50 mL/min) at baseline was 20.8% in the major surgery group and 24.9% in the nonmajor surgery group. Baseline CrCL was numerically higher for patients who underwent major surgeries (mean ± SD, 70.7 ± 25.9 mL/min) when compared with those who underwent nonmajor surgeries (mean ± SD, 66.3 ± 27.8 mL/min; P = 0.052). The percentages of patients undergoing major surgeries receiving 60 and 30 mg edoxaban were similar to the percentages of patients undergoing nonmajor surgeries (64.8% and 34.4% vs. 65.5% and 34.3%, respectively; Table 1 ).
Periprocedural-edoxaban interruption
Periprocedural-edoxaban interruption was assessed in 276 major and 512 nonmajor surgeries. The number of major vs. nonmajor surgeries with pre- and postprocedural interruption was 160 (58.4%) vs. 114 (22.3%), respectively ( P < 0.0001; Table 2 ). The number of major vs. nonmajor surgeries with preprocedural interruption only was 47 (17.2%) vs. 222 (43.4%), respectively; 13 (4.7%) major surgeries and 16 (3.1%) nonmajor surgeries had postprocedural interruption only (Table 2 ). Of the major surgeries with edoxaban interruption, 37 (13.5%) had interruption on only day 0 (surgery day), while 26 (9.5%) had interruption on only days 0 and 1; no surgeries had interruption on only day 1.
When including surgeries without interruption, the median duration of edoxaban interruption in major vs. nonmajor surgeries with pre- and postprocedural, preprocedural, or postprocedural interruption was 4 vs. 1 days, 2 vs. 1 days, or 2 vs. 0 days, respectively ( P < 0.0001 for all; Table 3 ). Table 4 summarises preprocedural edoxaban interruption, excluding surgeries without interruption. The number of major and nonmajor surgeries without any interruption was 54 (19.7%) and 160 (31.3%), respectively. Edoxaban resumption was protracted after major and nonmajor surgeries with high bleeding risk (Fig. 3 ).
Clinical outcomes
Major bleeding rates (number of events per 100 surgeries) were similar for major vs. nonmajor surgeries (0.4 vs. 0.6, respectively; Table 5 ). Rates of all bleeding in major vs. nonmajor surgeries were 4.3 vs. 3.3, and rates of CRNMB were 1.4 vs. 0.2 in major and nonmajor surgeries, respectively (Table 5 ). Overall, 2 thromboembolic events (1 stroke in nonmajor surgery group and 1 SEE in major surgery group) and 2 deaths (1 sepsis and 1 malignancy in major surgery group) were reported.
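As an illustration of how these rates are expressed, the sketch below converts raw event counts into events per 100 surgeries. The event count shown is a hypothetical back-calculation consistent with the rounded rate, not a value reported in the study tables.

```python
def rate_per_100(events: int, surgeries: int) -> float:
    """Clinical event rate, expressed as the number of events per 100 surgeries."""
    return round(100 * events / surgeries, 1)

# Hypothetical back-calculation: a single major bleeding event among the
# 276 major surgeries would reproduce the reported rate of 0.4.
major_bleeding_rate = rate_per_100(1, 276)
```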
Periprocedural-edoxaban interruption and clinical outcomes stratified by renal function
Patients with CrCL ≤ 50 mL/min and with CrCL > 50 mL/min had similar rates of pre- and postprocedural interruption, preprocedural-only interruption, and postprocedural-only edoxaban interruption (Table 6 ). Of the major surgeries with edoxaban interruption, treatment resumption was slower in patients with CrCL ≤ 50 mL/min when compared with patients with CrCL > 50 mL/min. In major surgeries with preprocedural interruption, the number of patients with edoxaban resumption ≥ 5 days after the surgery day was 28 (59.6%) in the CrCL ≤ 50 mL/min group and 67 (47.5%) in the CrCL > 50 mL/min group. Overall, the timing of edoxaban resumption did not differ by renal function category.
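The renal subgroups above hinge on the 50 mL/min CrCL threshold. As a minimal illustration (not study code), CrCL can be estimated with the Cockcroft-Gault equation named in the Statistical analysis section; the patient values below are hypothetical.

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) via the Cockcroft-Gault equation."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

def renal_subgroup(crcl: float) -> str:
    """Renal function categories used in this subanalysis."""
    return "CrCL <= 50 mL/min" if crcl <= 50 else "CrCL > 50 mL/min"

# Hypothetical patient: 75-year-old woman, 70 kg, serum creatinine 1.2 mg/dL
crcl = cockcroft_gault(75, 70, 1.2, female=True)   # ~44.8 mL/min
subgroup = renal_subgroup(crcl)                    # "CrCL <= 50 mL/min"
```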
Of patients who underwent major surgeries, the rate of all bleeding events was numerically lower in patients with CrCL ≤ 50 mL/min than in patients with CrCL > 50 mL/min (1.6 vs. 5.3); both deaths reported in the study occurred in patients with CrCL > 50 mL/min (Table 7 ). Of patients who underwent nonmajor surgeries, the rate of all bleeding events was numerically higher in patients with CrCL ≤ 50 mL/min than in patients with CrCL > 50 mL/min (5.2 vs. 2.0).
Discussion
This subanalysis of the Global EMIT-AF/VTE programme assessed the periprocedural management of edoxaban and the occurrence of bleeding and thromboembolic events in edoxaban-treated patients who underwent major or nonmajor surgeries. To the authors’ knowledge, this analysis is the first to report treatment interruption and clinical events in patients treated with edoxaban undergoing major or nonmajor surgeries. Baseline CHA2DS2-VASc score, HAS-BLED score, and CrCL were similar between patients who underwent major and nonmajor surgeries. While major surgeries had a longer period of edoxaban interruption compared to nonmajor surgeries, low rates of all bleeding, major bleeding, CRNMB, and thromboembolic events were observed in both groups. These results suggest that longer periprocedural edoxaban interruption for patients undergoing major surgeries may help mitigate the bleeding and thromboembolic risk in this group.
The periprocedural management of DOAC treatment focuses on reducing the risk of bleeding without increasing the risk of thromboembolic events: prolonged interruption of DOAC therapy may increase the risk of thromboembolism, most importantly ischaemic stroke, whereas an interruption that is too brief may increase the risk of bleeding. In our study, edoxaban therapy was not interrupted for 54 (19.6%) of the major surgeries, which may have been due in part to clinicians judging the haemorrhagic risk to be minor and in part to the fact that 3 of these were unplanned (emergency/urgent) surgeries. Major surgeries carry a higher risk of bleeding, and most recommendations suggest longer interruption times compared with low- or minor-bleeding-risk surgeries [ 17 ]. In line with these recommendations, the current study of routine clinical practice found that major surgeries had longer edoxaban interruption times when compared with nonmajor surgeries. Notably, only 24.5% of major surgeries were performed without any preprocedural interruption; 23.0% of major surgeries had one day or less of postprocedural interruption (interruption on day 0 and/or day 1). This agrees with findings from a subanalysis of the prospective Dresden registry, which reported data on 863 surgical or interventional procedures in DOAC-treated patients receiving predominantly rivaroxaban [ 5 ]. Of the procedures reported, 87 (10.1%) were major surgical procedures, 641 (74.3%) were minor procedures, and 135 (15.6%) were minimal procedures [ 5 ]. Despite having a smaller percentage of major procedures compared with our study, the Dresden study was similar to our analysis in that DOAC use was not interrupted in 22% of patients undergoing surgeries, and the majority of procedures were performed with DOAC interruption [ 5 ].
For patients participating in the Dresden registry, bleeding and cardiovascular event rates were low, similar to this subanalysis [ 5 ]. Notably, the present study stratified patients undergoing major and nonmajor surgeries by pre- and postprocedural interruption, whereas the Dresden study analysed periprocedural-DOAC interruption in major, minor, and minimal procedures [ 5 ]. The Dresden study also did not stratify patients by time of interruption relative to the procedure, nor did it specify whether DOAC use on the day of a procedure was considered uninterrupted [ 5 ]. Overall, bleeding and cardiovascular events were < 6% for all procedures; patients who underwent major procedures vs. those who underwent minimal and minor procedures had significantly higher rates of any bleeding (16.1% vs. 2.2% and 4.5%), major bleeding (8.0% vs. 0% and 0.5%), CRNMB (8.0% vs. 1.5% and 3.1%), major CV events (4.6% vs. 0% and 0.8%), and CV death (2.3% vs. 0% and 0.2%) at day 30 ± 5 after the procedure [ 5 ].
Additionally, results from this study are comparable with those from the PAUSE study. However, while the PAUSE study used a predefined interruption protocol for surgeries with different bleeding risks, the EMIT-AF/VTE study left the periprocedural-edoxaban management to the attending clinician without any influence of a study protocol [ 11 , 16 ]. Therefore, the low rates of bleeding and ischaemic events in major surgeries in this analysis suggest that clinicians’ decisions appropriately limited the risk of bleeding in high-risk major surgeries without increasing the risk of perioperative ischaemic events. Furthermore, compared with the PAUSE trial, this study used a more comprehensive definition of major surgeries that combined the criteria utilised in both the PAUSE and Dresden studies; this definition may reduce the risk of selection bias within our study [ 5 , 11 , 16 ].
In patients with AF, renal dysfunction is a risk factor for both thromboembolic and bleeding events [ 18 , 19 ]. Current guidelines recommend a reduced dose of DOACs in patients with renal impairment (CrCL ≤ 50 mL/min) [ 20 ]. In the current study, treatment resumption was protracted in patients with CrCL ≤ 50 mL/min vs. CrCL > 50 mL/min; approximately 70% of patients with CrCL ≤ 50 mL/min resumed edoxaban by day 30 vs. 90% of patients with CrCL > 50 mL/min. With regard to clinical event rates, in a subanalysis of the ENGAGE AF-TIMI 48 trial, patients on a high-dose edoxaban regimen with moderately reduced renal function (CrCL 30–50 mL/min) had numerically lower rates of major bleeding when compared with patients with CrCL > 50 mL/min [ 21 ]. In the current study, the rate of all bleeding events for patients with CrCL ≤ 50 mL/min vs. those with CrCL > 50 mL/min undergoing major surgeries was numerically lower, while the rate of all bleeding events for patients with CrCL ≤ 50 mL/min vs. those with CrCL > 50 mL/min undergoing nonmajor surgeries was numerically higher. This may be due, in part, to a longer periprocedural interruption time in patients with CrCL ≤ 50 mL/min vs. those with CrCL > 50 mL/min undergoing major surgeries, whereas in the nonmajor surgery group, there was no difference in interruption duration between renal function subgroups. These results support the safety of the clinician-driven edoxaban-management regimen in vulnerable populations, such as patients with renal impairment. However, bleeding event rates (number of events per 100 surgeries) were low overall, regardless of renal function or surgery group (< 6 for all outcomes).
Limitations of this subanalysis include the lack of a DOAC-comparator arm and the lack of formal statistical comparisons between groups for the periprocedural management of edoxaban and clinical outcomes. Additionally, edoxaban management was not standardised, as it was at the discretion of the investigator; however, this enabled patient-individualised treatment. EMIT-AF/VTE is a global programme with data from 326 centres comprising a large number of patients undergoing a wide range of major or nonmajor surgeries in routine clinical practice, including a high percentage of patients (20.8%) with CrCL ≤ 50 mL/min, representing a strength of this analysis. As a large observational study, these data complement randomised controlled trial data, reflecting edoxaban management in current clinical practice without the guidance of a predefined study protocol.
Conclusion
In this subanalysis of the EMIT-AF/VTE programme, patients’ edoxaban regimens were interrupted more frequently and for longer periods of time for major vs. nonmajor surgeries. Periprocedural management of edoxaban driven by decisions from the attending clinicians was associated with low rates of all bleeding, major bleeding, CRNMB, and thromboembolic events in patients undergoing major or nonmajor surgeries.
Background
Optimising periprocedural management of direct oral anticoagulation in patients with atrial fibrillation on chronic treatment undergoing major surgeries is an important aspect of balancing the risk of surgery-related bleeding with the risk of thromboembolic events, which may vary by surgery type.
Methods
This subanalysis of the prospective EMIT-AF/VTE programme assessed periprocedural-edoxaban management, according to physicians’ decisions, and bleeding and thromboembolic event rates in patients who underwent major vs. nonmajor surgeries. Edoxaban interruption and clinical outcomes were compared between major vs. nonmajor surgeries and between renal function subgroups (creatinine clearance [CrCL] ≤ 50 mL/min vs. > 50 mL/min).
Results
We included 276 major and 512 nonmajor surgeries. The median pre- and postprocedural duration of edoxaban interruption in major vs. nonmajor surgeries was 4 vs. 1 days, whereas median duration of interruption for those with preprocedural-only and postprocedural-only interruption was 2 vs. 1 days and 2 vs. 0 days, respectively ( P < 0.0001). Rates of all bleeding and clinically relevant nonmajor bleeding were numerically higher in major vs. nonmajor surgeries. Event rates (number of events per 100 surgeries) were low overall (< 6 events per 100 surgeries), independent of renal function subgroups.
Conclusion
In this subanalysis of the EMIT-AF/VTE programme, periprocedural-edoxaban interruption was significantly longer in patients undergoing major vs. nonmajor surgery. This clinician-driven approach was associated with low rates of bleeding and thromboembolic events following both major and nonmajor surgeries.
Trial registration
NCT02950168, registered October 31, 2016; NCT02951039, registered November 1, 2016.
Observations and outcomes
Observations, including edoxaban interruption and clinical event data, were recorded from 5 days before the procedure until 30 days after the procedure. To enhance data capture, patients received a memory aid booklet at study enrolment, which was reviewed at the end of the study. Edoxaban therapy was considered uninterrupted if treatment was administered on each day of the observation period, including at any time on the day of the procedure. Any interruption of edoxaban treatment was recorded as the number of days without administration of edoxaban (preprocedural days [the 5 days before the procedure and the procedure day] and/or postprocedural days [within the 30 days after the procedure]).
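The bookkeeping described above can be sketched as follows. Day offsets are relative to the procedure day (day 0), with the observation window running from day -5 to day +30 as in the study; the per-patient dosing log is hypothetical.

```python
def interruption_days(dosed_days: set) -> dict:
    """Count days without edoxaban administration in the pre- and
    postprocedural windows; 'uninterrupted' requires dosing on every
    day of the observation period, including the procedure day."""
    pre_window = range(-5, 1)     # the 5 days before the procedure plus day 0
    post_window = range(1, 31)    # the 30 days after the procedure
    pre_missed = [d for d in pre_window if d not in dosed_days]
    post_missed = [d for d in post_window if d not in dosed_days]
    return {
        "uninterrupted": not pre_missed and not post_missed,
        "preprocedural_days": len(pre_missed),
        "postprocedural_days": len(post_missed),
    }

# A patient who skipped edoxaban only on days -1, 0 and +1:
summary = interruption_days(set(range(-5, 31)) - {-1, 0, 1})
```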
The primary safety outcome was the incidence of major bleeding, as defined by the International Society of Thrombosis and Haemostasis (ISTH) [ 14 , 15 ]. Secondary outcomes included evaluation of periprocedural-edoxaban interruption, incidence of acute coronary syndrome (ACS), CRNMB, minor bleeding, all bleeding, all-cause death, CV death, and acute thromboembolic events (stroke, transient ischaemic attack, SEE). All major bleeding, CRNMB, ACS, and acute thromboembolic events were reviewed and unanimously adjudicated by the Steering Committee. Within each group, periprocedural-edoxaban interruption and clinical events were also analysed by renal function category (CrCL ≤ 50 vs. > 50 mL/min). The following parameters were documented at baseline: concomitant medications; HAS-BLED (Hypertension, Abnormal renal/liver function, Stroke, Bleeding history or predisposition, Labile international normalised ratio, Elderly, Drugs/alcohol concomitantly) score; CHA2DS2-VASc (Congestive heart failure, Hypertension, Age ≥ 75 [doubled], Diabetes, Stroke [doubled], Vascular disease, Age 65–74 years, and Sex category [female]) score; renal function; details of edoxaban treatment; diagnostic/therapeutic procedures; and medical history.
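The CHA2DS2-VASc scoring spelled out in the parenthetical above can be made concrete; the sketch below implements it directly from that definition, with a hypothetical example patient.

```python
def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 stroke: bool, vascular_disease: bool, female: bool) -> int:
    """CHA2DS2-VASc score: 1 point each for congestive heart failure,
    hypertension, diabetes, vascular disease, age 65-74 and female sex;
    2 points each for age >= 75 and prior stroke."""
    score = int(chf) + int(hypertension) + int(diabetes) + int(vascular_disease)
    score += 2 * int(stroke)
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    return score + int(female)

# Hypothetical patient: 74-year-old woman with hypertension and diabetes -> 4
example_score = cha2ds2_vasc(False, True, 74, True, False, False, True)
```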
Classification of surgeries
Major surgeries were classified by a combination of criteria used in the Dresden registry and PAUSE study: relevant tissue trauma and high bleeding risk; utilisation of general or neuraxial anaesthesia; major intracranial, neuraxial, thoracic, cardiac, vascular, abdominopelvic, or orthopaedic surgery; or other major cancer or reconstructive surgery [ 5 , 16 ]. All major surgeries were considered high risk based on European Heart Rhythm Association (EHRA) bleeding risk levels, and nonmajor surgeries were assigned risk levels per EHRA periprocedural bleeding risk criteria.
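Reading the listed criteria as alternatives, the classification can be sketched as a simple any-criterion rule; the parameter names are illustrative groupings, not variables from the study database.

```python
def is_major_surgery(tissue_trauma_high_bleeding_risk: bool,
                     general_or_neuraxial_anaesthesia: bool,
                     major_site_surgery: bool,
                     major_cancer_or_reconstructive: bool) -> bool:
    """Combined Dresden/PAUSE classification: a surgery is treated as
    'major' if it meets any one of the listed criteria (an assumption
    based on the 'or'-joined list in the text)."""
    return any([tissue_trauma_high_bleeding_risk,
                general_or_neuraxial_anaesthesia,
                major_site_surgery,
                major_cancer_or_reconstructive])
```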
Statistical analysis
Baseline data are presented as frequencies and/or as summary statistics. P -values for baseline characteristics were calculated using Fisher’s exact test. Pre- and postprocedural edoxaban interruption and clinical outcomes were compared between major vs. nonmajor surgery groups and between renal function subgroups (CrCL ≤ 50 mL/min vs. CrCL > 50 mL/min, calculated using the Cockcroft-Gault equation); data are presented as summary statistics (n, mean, standard deviation [SD]) for numerical parameters and absolute and relative frequencies for duration of interruption between major vs. nonmajor surgeries. Clinical event rates are presented as number of events per 100 surgeries. Clinical outcomes analyses were descriptive and exploratory; no statistical comparisons were made between subgroups. Edoxaban interruption duration data included all patients, both with and without interruption, to avoid selection bias. P -values for duration of edoxaban interruption were calculated using the Wilcoxon test.
Abbreviations
ACS: Acute coronary syndrome
AF: Atrial fibrillation
CHA2DS2-VASc: Congestive heart failure, Hypertension, Age ≥75 (doubled), Diabetes, Stroke (doubled), Vascular disease, Age 65–74 years, and Sex category (female)
CrCL: Creatinine clearance
CRNMB: Clinically relevant nonmajor bleeding
CV: Cardiovascular
DOACs: Direct oral anticoagulants
EHRA: European Heart Rhythm Association
HAS-BLED: Hypertension, Abnormal renal/liver function, Stroke, Bleeding history or predisposition, Labile international normalised ratio, Elderly, Drugs/alcohol concomitantly
ISTH: International Society of Thrombosis and Haemostasis
SEE: Systemic embolic events
SD: Standard deviation
VTE: Venous thromboembolism
Acknowledgements
Writing and editorial support were provided by Kimberly Dent-Ferguson, MBS, MPH, of AlphaBioCom, a Red Nucleus company (King of Prussia, PA, USA), and funded by Daiichi Sankyo.
Authors’ contributions
CvH: Conceptualisation, Investigation, Writing – review & editing; MU: Conceptualisation, Investigation, Writing – review & editing; PC: Conceptualisation, Writing – review & editing; AS: Conceptualisation, Writing – review & editing; MS: Conceptualisation, Writing – review & editing; TV: Conceptualisation, Writing – review & editing; AB: Conceptualisation, Investigation, Writing – review & editing; SK: Data curation, Formal analysis, Validation, Writing – review & editing; JJ: Data curation, Methodology, Formal analysis, Validation, Writing – review & editing; CC: Conceptualisation, Investigation, Writing – review & editing.
Funding
This study was funded by Daiichi Sankyo, Inc. Open Access funding enabled and organized by Projekt DEAL.
Availability of data and materials
The data underlying this article cannot be shared publicly, as the Global EMIT-AF/VTE study is currently ongoing.
Declarations
Ethics approval and consent to participate
The Global EMIT-AF/VTE programme (NCT02950168, NCT02951039) is a multicentre, prospective, observational programme conducted in accordance with the Declaration of Helsinki and with local Institutional Review Board approvals. All participants provided written informed consent prior to enrolment.
Competing interests
CvH reports grants and personal fees from Daiichi Sankyo Europe and Daiichi Sankyo Germany; personal fees from Pfizer/Bristol Myers Squibb, CSL Behring, Mitsubishi Pharma, Novo Nordisk Pharma, HICC GbR, Sobi Pharma, and Shionogi Pharma; receipt of a mandate from the German Society of Anaesthesiology and Intensive Care Medicine to write the German Guideline on Preoperative Anaemia; participation in the writing group of the Guideline on Reversal of DOAC-Induced Life-Threatening Bleeding as well as participation in the writing group of the Regional Anaesthesia in Patients on Antithrombotic Drugs (published 2022) both on behalf of the European Society of Anaesthesiology and Intensive Care Medicine; and receipt of a mandate to take part in the writing group of the guidelines on the Diagnostics and Treatment of Peripartum Haemorrhage of the Deutsche Gesellschaft für Gynäkologie und Geburtshilfe. MU, AB, JJ, and CC are employees of Daiichi Sankyo. PC reports grants and personal fees from Daiichi Sankyo Europe; personal fees from Daiichi Sankyo Italy, Boehringer Ingelheim, Bayer AG, and Pfizer/Bristol Myers Squibb; and nonfinancial support from the European Society of Cardiology and the Italian Cardiology Association. AS reports nothing to disclose. MS reports grants and personal fees from Daiichi Sankyo Europe and personal fees from Daiichi Sankyo UK. TV reports grants and personal fees from Daiichi Sankyo Europe; and personal fees from Daiichi Sankyo Be, Bayer, LEO Pharma, and Boehringer Ingelheim. SK reports consulting fees from Daiichi Sankyo.
Thromb J. 2023 Dec 14; 21:124. License: CC BY.
PMC10724087 (PMID 38102530)
Introduction
Lymphedema is a chronic debilitating disease marked by deficits in lymph drainage and accumulation of protein-rich fluid, leading to limb edema that can progress to cellulitis and fibrosis over time. People with lymphedema are susceptible to extremity impairment, recurrent soft tissue inflammation and infections, lymphorrhea, body disfigurement, and psychological and social issues [ 1 , 2 ]. Breast cancer treatments can result in breast cancer-related lymphedema (BCRL), with reported frequencies ranging from 6 to 65% [ 3 – 5 ].
Treatments for lymphedema focus on symptom management and improved patient-reported outcomes (PROs). Traditional interventions include manual lymphatic drainage (MLD), compression therapy and self-care (e.g., skin hygiene, limb elevation, exercise, compression garments) [ 6 ]. In very severe cases, lymphatic exchange may be performed [ 7 ]. More recently, pneumatic compression devices (PCDs) have become an additional treatment option that clinicians can offer patients for the treatment of lymphedema. Clinical studies have demonstrated that regular use of PCDs, as an adjunct to standard self-care measures, is associated with significant patient-reported improvements in overall symptoms, decreased limb girth, decreased limb volume, increased elasticity of tissues, and fewer episodes of infection [ 8 – 12 ].
Adherence to prescribed at-home self-care is critical to the successful treatment of lymphedema [ 11 , 13 ]. Research shows that compliance with some risk management behaviors diminishes over time while adherence to other behaviors remains high [ 14 , 15 ]. Text messaging is a convenient method to send patients reminders to use home therapies, take prescribed medications, or follow postoperative instructions, and has been used for a variety of disease states [ 16 – 18 ]. The primary purpose of this study was to determine whether cell phone text reminders impacted the rate of compliance with PCD therapy. Secondary objectives were to examine the changes in arm girth, quality of life, and symptom severity in patients using PCD for BCRL.
Materials and methods
Study design and population
The study was an on-label, prospective, randomized, 2-group feasibility study conducted at 2 centers. Ethics approval was received through Western IRB (Puyallup, WA) and University of Louisville Human Subjects Protection Program Office (Louisville, KY). The study is registered at www.clinicaltrials.gov (unique identifier: NCT04432727). Before participating in the study, all participants signed the IRB-approved written informed consent.
Participants were females of 18 years or older with unilateral BCRL who provided informed consent, agreed to comply with the study requirements, and were able to receive text messages from the study sponsor. Participants were excluded from participation if they had used a PCD in the previous 3 months, had undergone phase 1 complete decompression therapy (CDT) within 1 month or were planning to undergo during the study period, were currently undergoing curative cancer therapy, or were unable to be fitted for PCD garments. Additional medical conditions excluding participants were heart failure, acute venous disease, active skin or limb infection or inflammatory disease, pregnancy or planning to become pregnant, and any condition where increased lymphatic or venous return was undesirable. Lastly, participants were excluded if there was a known inability to receive cell phone connection where the PCD therapy was to be administered.
After signing the informed consent and inclusion/exclusion criteria were confirmed, participants were randomized by an electronic data capture database to either PCD therapy with connectivity (test) or PCD therapy without connectivity (control). The randomization scheme was generated by a statistician using a permutated block design with block size balanced within each block to maintain a 1:1 ratio between treatment groups.
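A permuted block design like the one described can be sketched as follows. The block size of 4 and the fixed seed are illustrative assumptions; the study reports only that blocks were balanced to maintain a 1:1 ratio.

```python
import random

def permuted_block_schedule(n_blocks: int, block_size: int = 4, seed: int = 1) -> list:
    """1:1 allocation via randomly permuted blocks: every block contains
    equal numbers of test (T) and control (C) assignments, so the ratio
    stays balanced within each block."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = ["T"] * (block_size // 2) + ["C"] * (block_size // 2)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

allocation = permuted_block_schedule(n_blocks=8)  # 32 slots, 16 per arm
```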
PCD therapy was conducted using the Flexitouch® (FT) Plus advanced PCD (Tactile Medical, Minneapolis, MN, USA) in daily 60-min U1 unilateral sessions at normal pressure. The devices used for the study were identical to the commercially available devices except for a cellular communication module that was added to the controller unit, which transmitted usage data to a cloud-based database. This usage information was used to send automated text message reminders to participants in the test group if they had not used the device for 2 consecutive days. Participants in the control group did not receive text message reminders.
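The reminder trigger can be sketched as below; this is one plausible reading of the "2 consecutive days" rule (checking the two most recent days), not the vendor's actual implementation, and the dates are hypothetical.

```python
from datetime import date, timedelta

def needs_reminder(usage_dates: set, today: date) -> bool:
    """Send an automated text if the device was not used on either of
    the 2 most recent days (assumed interpretation of the study rule)."""
    return all(today - timedelta(days=d) not in usage_dates for d in (1, 2))

# No recorded use on Jan 2 or Jan 3 -> a reminder would be sent on Jan 4
send = needs_reminder({date(2020, 1, 1)}, today=date(2020, 1, 4))  # True
```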
Assessments
Participants completed study visits at screening, baseline, device training (within 21 days after the baseline visit), and at 30-day and 60-day follow-ups. Device training was conducted by qualified Tactile Medical personnel.
The primary endpoint was to compare the rate of treatment compliance in patients in the test group with those in the control group. Complete compliance was defined as an average of 5–7 treatments per week, partial compliance as 1–4 treatments per week, and noncompliance as <1 treatment per week.
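The compliance bands above map directly to a categorisation rule; the sketch below applies them to a weekly treatment average. How averages falling between the stated bands (e.g., 4.5 per week) were handled is not specified in the protocol, so treating everything from 1 up to (but not including) 5 as partial is an assumption.

```python
def compliance_category(avg_treatments_per_week: float) -> str:
    """Compliance bands as defined for the primary endpoint."""
    if avg_treatments_per_week >= 5:
        return "complete"       # average of 5-7 treatments per week
    if avg_treatments_per_week >= 1:
        return "partial"        # 1-4 treatments per week (gap handling assumed)
    return "noncompliant"       # <1 treatment per week
```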
Exploratory endpoints included assessments of arm girth, quality of life (QOL), symptom questionnaires, and adverse events. Participants were provided with a tape measure to make arm girth measurements on the anterior forearm 6 cm below the midline of the antecubital fossa on the affected arm.
Disease-specific quality of life was measured using the Lymphedema Quality of Life Tool (LYMQOL ARM). The LYMQOL ARM is a 21-item questionnaire designed and validated in patients with chronic edema. The survey includes 4 domains: function, appearance, symptoms, and mood, and an overall QOL score. Each domain item is scored on a 4-point scale with higher scores indicating worse QOL. The overall QOL score is scored from 0 (poor) to 10 (excellent) [ 19 ].
Symptom severity was assessed using the Lymphedema Symptom Intensity and Distress Survey-Arm (LSIDS-A). The LSIDS-A is a 30-item validated assessment tool designed for measuring arm lymphedema and its associated symptoms in patients with BCRL [ 20 ]. The questionnaire reporting period is the previous 7 days and symptoms are reported as “yes” or “no,” symptoms with responses of “yes” are then scored from 1 to 5 with higher scores indicating more severe symptoms. The scores are calculated into an overall score and 7 domain scores: soft tissue sensation, neurological sensation, functional, behavioral, resource, sexual function, and activity. The overall and domain scores are the means of the individual items included in the score.
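Since the LSIDS-A overall and domain scores are described as means of the included items, the scoring can be sketched as below. Entering symptoms answered "no" as 0 is an assumption; the text does not state how absent symptoms contribute.

```python
def lsids_mean_score(item_scores: list) -> float:
    """LSIDS-A overall/domain score as the mean of the individual item
    scores (1-5 for symptoms answered 'yes'; absent symptoms entered
    as 0 here, as an assumption)."""
    return sum(item_scores) / len(item_scores)

# Hypothetical 4-item domain: two absent symptoms, two rated 3 and 5
domain_score = lsids_mean_score([0, 0, 3, 5])  # 2.0
```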
General quality of life was assessed using the validated RAND Short Form-36 (SF-36). The SF-36 evaluates 8 domains: general health, physical functioning, physical role limitations, emotional role limitations, energy/fatigue, emotional well-being, social functioning, and bodily pain [ 21 ]. Scores were normalized, so that each score has a range from 0 (maximum disability) to 100 (no disability). Higher scores indicate a more favorable health state.
Adverse events were categorized as serious or nonserious with severity rankings of mild, moderate, or severe. The relationship of the event to the device was rated as not related, possibly, probably, or definitely related.
Statistical analysis
No power calculation was performed to derive a sample size, given that this study sought to identify which health outcomes should be used for a larger randomized controlled trial in the future. The desired target sample size for this feasibility study was set at 10 analyzable data sets for each group. Enrollment of up to 30 participants was planned to achieve 60-day follow-up data on at least 20 participants.
The analysis population includes all enrolled participants. Participants were assessed by treatment group for the primary endpoint. Ad hoc regression analyses were performed on the pooled cohort by compliance status (complete, partial, noncompliance), irrespective of the treatment assignment.
Descriptive statistics were calculated for all continuous variables. Frequencies, percentages, and confidence intervals were calculated for categorical data. Any data found to be randomly missing within the survey measures were handled as specified by the survey developers. Nonrandom missing visit data were not imputed.
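For the categorical endpoints, a proportion and its confidence interval can be computed as below. This is an illustrative Python sketch; the paper does not state which CI method was used, so a normal approximation is assumed:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate half-width for a proportion using the
    normal-approximation (Wald) 95% CI; illustrative only, since the
    CI method used in the study is not stated."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Complete compliance, 12 of 23 participants:
low, high = proportion_ci(12, 23)  # roughly (0.32, 0.73)
```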
Mixed effects regression models were fit to the data to account for the repeated (non-independent) nature of the measurements for each participant over multiple timepoints. An autocorrelation structure (AR1) was used to account for single-order correlation between timepoints. For each participant, the measures post baseline are likely dependent upon the prior timepoint. The post baseline means and differences presented in the tables are model estimated means and differences, and not the raw observed means and differences for each time point. A Tukey method was used to adjust the confidence intervals and p values for multiple comparisons. P values <0.05 were considered statistically significant.
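The AR(1) assumption means the correlation between two measurements decays geometrically with the number of intervening timepoints. For the three study visits this implies the correlation structure sketched below (illustrative; rho would be estimated by the model, and the value here is arbitrary):

```python
def ar1_corr_matrix(n_timepoints, rho):
    """Correlation matrix implied by an AR(1) structure:
    corr(t_i, t_j) = rho ** |i - j|, so adjacent timepoints share
    correlation rho and more distant ones progressively less."""
    return [[rho ** abs(i - j) for j in range(n_timepoints)]
            for i in range(n_timepoints)]

# Baseline, 30-day, 60-day visits with an assumed rho of 0.5:
ar1_corr_matrix(3, 0.5)
# -> [[1.0, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 1.0]]
```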
Analyses were performed using R version 4.1.0 or higher (The R Foundation for Statistical Computing, https://www.R-Project.org ).

Results
Sixty-one participants were screened for the study with 29 ultimately enrolled and randomized between August 2020 and September 2021. Four participants withdrew, resulting in 60-day follow-up available for 25 participants (14 test, 11 control).
Demographic and other baseline data are presented in Table 1 . Participants were primarily White women (82.8%) in their 50s with BMI >30. The test group had a shorter period since their lymphedema diagnosis than the control group (1.7 vs 2.5 years).
Compliance with treatment
The primary endpoint of treatment compliance is shown in Table 2 . Two participants, 1 in each group, did not start treatment but otherwise continued in the study; hence, only 23 participants had device compliance data. One test group participant did not receive an expected text reminder due to technical issues. There was no difference between the test and control groups for device compliance. In addition, no statistically significant differences in compliance were seen based on demographic or baseline characteristics. Overall, 52.2% (12/23) of participants were completely compliant, an additional 43.5% (10/23) were partially compliant, and 1 patient (4.3%) was noncompliant. Since there was no difference in compliance between the treatment groups, outcomes were analyzed on the complete cohort.
Outcomes for pooled population
Changes in weight, BMI, and arm girth are shown in Table 3 . By regression analysis, the reductions in weight and BMI at the 30-day visit were statistically significant ( p <0.05). The change in arm girth was not significant at the 30-day visit, but was significantly reduced at the 60-day visit ( p =0.034).
The overall and domain scores of the LYMQOL ARM questionnaire are provided in Table 4 . The overall quality of life and functional domain scores were significantly improved at the 60-day follow-up ( p =0.004 and p =0.027, respectively). The symptom domain score was significantly improved at 30-day follow-up ( p =0.006). The mood domain was also significantly improved at the 30-day follow-up ( p =0.009).
The LSIDS-A results (Table 5 ) indicate that overall score improved significantly at both timepoints, as did the neurological sensation domain score. The soft tissue sensation domain and behavioral domain scores were significantly improved at the 60-day follow-up ( p =0.010, and p =0.044, respectively).
The SF-36 scores results are presented in Table 6 . Although improvements from baseline were observed in each domain, only the change in the pain domain was statistically significant at the 30-day ( p =0.042) and 60-day ( p =0.009) follow-ups.
Regression analysis by compliance status
An ad hoc exploratory analysis was performed on the pooled population with outcomes evaluated by compliance status (complete vs partial compliance).
Although there were limited numbers of Black and Hispanic participants, we were interested to see if there was any difference in compliance compared with the White/non-Hispanic participants. Participants who identified as Black or Hispanic ( n =5, 21.7%) were marginally more fully compliant than the participants who identified as White and non-Hispanic ( n =18, 78.2%) (60.0 vs 50.0%) at the 60-day follow-up.
By regression analysis, only the fully compliant group demonstrated statistically significant improvement in arm girth at 60 days (change –1.7 cm, p =0.050). More participants who were fully compliant experienced decreased arm girth than the partially compliant participants (91.7% [11/12] vs 40.0% [4/10]).
Both the partially compliant and fully compliant groups demonstrated statistically significant reductions ( p ≤0.001) from baseline in weight and BMI at both the 30- and 60-day follow-ups. There were no significant differences between compliance groups.
Improvements in the overall score for the LYMQOL-ARM questionnaire at 60 days were observed in 70% (7/10) of partially compliant and 50% (6/12) of completely compliant participants, as well as in the 1 noncompliant participant. By regression analysis, there were no significant differences within or between groups for the overall score at either time point. Only the functional domain score demonstrated a significant improvement over baseline for the fully compliant group at 60 days (change –0.38, p =0.030).
LSIDS-A overall scores were improved at 60 days for 77.8% (7/9) of the partially compliant participants (1 participant did not complete all questionnaire items) and 91.7% (11/12) of the completely compliant participants. By regression analysis, the change from baseline to 60-day follow-up for the fully compliant group was statistically significant for the soft tissue sensation (–1.6, p =0.037), neurological sensation (–1.8, p =0.013), and overall domains (–1.0, p =0.021) of the LSIDS-A. Changes from baseline for the partially compliant group did not reach statistical significance at either time point for any domain score.
For the SF-36 questionnaire, there were no statistically significant differences in any domains within or between compliance groups for either follow-up time point by regression analysis.
Adverse events
During the study period, 7 device-related adverse events occurred in 5 participants. Events in the test group were 1 case each of suspected cellulitis, tenderness, and maculopapular rash. Events in the control group were 1 case each of clicking in thumb joint, bilateral buttock pain, worsening lymphedema, and exacerbation of arm pain. None of the events were serious and all were mild to moderate in severity.

Discussion
Our study findings indicate that the text reminders did not improve treatment compliance because the BCRL patients were already highly treatment compliant. Overall, there were significant improvements in mean weight and BMI at the 30-day visit and in arm girth at the 60-day visit. There were significant improvements in several lymphedema-specific QOL and symptom severity measures at the 30- and/or 60-day follow-ups, and the SF-36 pain domain score was significantly improved at both the 30- and 60-day follow-ups.
Because we did not find any differences in compliance by the randomized study groups for the primary endpoint, the data were pooled and outcomes evaluated by compliance status (complete vs partial compliance). There were no differences between compliance groups for weight, BMI, SF-36 scores, or LYMQOL-ARM scores except the functional domain. Compared to the partially compliant group, the fully compliant group experienced significantly greater improvements in arm girth, LYMQOL-ARM functional domain score, and in the LSIDS-A overall, soft-tissue sensation, and neurological sensation scores. Although even partial compliance is beneficial, these findings support the effort to encourage full treatment compliance among BCRL patients in order to obtain the optimal benefits the therapy offers.
The strengths of this study include the randomized assignment and comparison of participants who received or did not receive text reminders for treatment compliance. Additionally, we measured compliance through data automatically received from the PCD device, mitigating recall issues of patient-reported compliance. We also included comparisons of validated patient-reported outcomes to evaluate changes in quality of life and symptomology related to treatment compliance.
One obvious limitation of the study is the small sample size. This study was designed as a small feasibility study to further inform additional studies in the future. Additionally, there is the potential for selection bias since all study participants were required to have the capability to receive text messages at their treatment location. This requirement effectively excluded patients with limited internet or phone access. Although we were interested to see if there were any differences in outcomes across racial and ethnic variables, our population was predominantly White, making it difficult to meaningfully assess for the presence of health disparities among those who met eligibility criteria. That said, we did see a trend toward higher rates of full compliance among Black study participants compared with White participants. Larger studies with more diverse populations may help to delineate whether any equity issues exist with access to and compliance with PCD therapy.
Finally, in hindsight, the BCRL population was not the most effective group on which to test this technology, since this population is already highly compliant without receiving reminders. Future studies in other populations of patients with lymphedema may demonstrate different results.

Conclusion
Automated text reminders did not improve compliance in patients with BCRL as compliance rates were already high in this patient population across racial and ethnic variables. Improvements in weight, BMI, arm girth, disease-specific quality of life, and symptom severity measures were observed regardless of the treatment assignment. Full compliance resulted in greater functional and QOL benefits.

Purpose
Do cell phone text reminders impact the rate of compliance with pneumatic compression device (PCD) therapy among women with breast cancer-related lymphedema (BCRL)?
Methods
A prospective, randomized, 2-group feasibility study conducted at 2 centers. Participants were adult females (≥18 years old) with unilateral BCRL who had the capability of receiving reminder text messages. All participants underwent PCD therapy. Participants were randomized 1:1 to control (no text messages) or test group (received text message reminders if the PCD had not been used for 2 consecutive days). The rate of compliance between treatment groups was the main outcome measure. Secondary outcome measures were changes in arm girth, quality of life (QOL), and symptom severity.
Results
Twenty-nine participants were enrolled and randomized; 25 were available for follow-up at 60 days (14 test, 11 control). Overall, 52.2% (12/23) of all participants were completely compliant, an additional 43.5% (10/23) were partially compliant, and 1 patient (4.3%) was noncompliant. The test and control groups did not differ in device compliance. In the pooled population, weight, BMI, and arm girth improved, as did overall disease-specific QOL and symptom severity. Regression analysis showed benefits were greater among participants with higher rates of compliance.
Conclusions
Automated text reminders did not improve compliance in patients with BCRL as compliance rates were already high in this patient population. Improvements in weight, BMI, arm girth, disease-specific quality of life, and symptom severity measures were observed regardless of the treatment assignment. Full compliance resulted in greater functional and QOL benefits.
Trial registration
The study was registered at www.clinicaltrials.gov (NCT04432727) on June 16, 2020.
Keywords

Acknowledgements
We thank Marc Schwartz for the statistical analysis and the other investigators: Sarah Pesek, MD, St. Peter's Health Partners, Troy, NY and Nicolas Ajkay, MD, University of Louisville, Louisville, KY. We also thank the patient participants and research team at St. Peter's Health System.
Author contribution
SM contributed to the study conception and design. Material preparation and data collection were performed by SM. Analysis and interpretation of the results were performed by SM and EMO. The first draft of the manuscript was written by EMO. All authors commented on all versions of the manuscript. All authors read and approved the final manuscript.
Funding
This study was funded by Tactile Medical.
Declarations
Ethics approval
Approval was granted by Western IRB (Puyallup, WA) and University of Louisville Human Subjects Protection Program Office (Louisville, KY). This study was performed in line with the principles of the Declaration of Helsinki.
Informed consent
Written informed consent was obtained from all individual participants included in the study.
Competing interests
EMO was paid by Tactile Medical to assist the lead author with the writing and preparation of the manuscript. SM has no conflicts to report.

Citation: Support Care Cancer. 2024 Dec 16; 32(1):33. License: CC BY.
PMC10724324 (PMID: 38100037)

Introduction
Lymphocyte-specific protein tyrosine kinase (LCK) is critical for T-cell development and activation. LCK is recruited to the T-cell antigen receptor (TCR) after ligation of the TCR by peptide:MHC complexes on the surface of antigen-presenting cells [ 3 , 4 ]. LCK-dependent phosphorylation of immunoreceptor tyrosine-based activation motifs (ITAMs) in the cytoplasmic tails of CD3δ, -ε and -ζ chains creates docking sites for ζ-chain-associated protein kinase 70 (ZAP70) [ 5 , 6 ]. LCK next phosphorylates ITAM-bound ZAP70, which transduces the signal to the linker for activation of T-cells (LAT). LAT, together with SRC homology 2 (SH2) domain-containing leukocyte protein of 76kDa (SLP76), forms a signal amplification and diversification hub, ultimately leading to T-cell activation [ 7 , 8 ]. Besides the CD3-chains and ZAP70, LCK phosphorylates additional members of the TCR signaling network, such as protein kinase Cθ (PKCθ) and interleukin-2 inducible T cell kinase (ITK), and is involved in signaling downstream of other receptors, most prominently the co-stimulatory molecule CD28 [ 9 ], altogether contributing to the central role of LCK in T-cell biology.
LCK differs from other SRC-family kinases (SFKs), such as FYN and SRC, in its N-terminal SH4 domain targeting LCK to the plasma membrane and the unique domain (UD) mediating (weak) interaction with the co-receptors CD4 and CD8 [ 10 , 11 ]. SH3 and SH2 domains, providing docking sites for intra- and intermolecular interactions, are connected by a linker region to the kinase domain (KD) and a C-terminal unstructured tail. Inactive LCK, phosphorylated on Y505 by the C-terminal SRC Kinase (CSK) [ 12 ], adopts a closed conformation with an intramolecular interaction of pY505 with the SH2 domain [ 13 ]. De-phosphorylation of pY505 by CD45 [ 14 – 16 ] opens LCK into a primed state, which allows trans-autophosphorylation of Y394 resulting in full LCK kinase activity [ 17 ].
LCK is found both free and co-receptor bound with varying distribution between CD4 + and CD8 + T-cells and their differentiation stages [ 18 ], likely serving different purposes [ 19 – 21 ]. Despite CD4 and CD8 co-receptors being structurally dissimilar, the non-covalent interaction of both coreceptors with LCK is mediated by two conserved cysteines forming a Zn 2+ -clasp structure [ 10 ]. Importantly, while CD4 is present as a monomer, CD8 forms CD8αβ hetero- or CD8αα homodimers, in conventional or unconventional T-cells, respectively, where LCK is only bound to CD8α [ 22 ].
Besides LCK, a T-cell-specific isoform of FYN (FYN-T) and SRC are expressed in T-cells, albeit the latter only at low levels. Despite a high similarity between FYN-T and LCK, their functions are non-redundant, partly owing to different subcellular localization determined by their different SH4 domains [ 3 ]. Nonetheless, FYN-T can compensate for some LCK functions. While lck -/- mice have an almost complete block in thymic positive selection, some peripheral mature T-cells develop [ 23 ]. In contrast, in lck -/- fyn -/- mice, no mature αβ T-cells are formed [ 24 ].
Owing to its critical role in T-cell biology, LCK loss of function (LOF) is expected to lead to a profound T-cell deficiency. Until now, only one case of complete LCK deficiency has been described [ 1 ]. A child suffering from recurrent, severe infections and immune dysregulation was found to have a biallelic missense mutation in LCK (c.1022T>C), resulting in an amino acid substitution in the kinase domain (p.L341P) with low protein expression and complete loss of LCK kinase activity. More recently, Li et al. described a homozygous splice site mutation in LCK (c.188-2A>G), predicted to affect the 3’ splice acceptor site of LCK exon 4, in a consanguineous family presenting with a partial CD4 + T-cell defect, and susceptibility to human papillomavirus (HPV) infections with atypical epidermodysplasia verruciformis (EV), as well as recurrent pneumonia [ 2 ]. Further, two reports of immune deficiency with reduced LCK protein expression and the presence of an aberrant isoform of LCK mRNA missing exon 7 have been published. However, pathogenic mutations in LCK were not identified in these reports, and signaling studies were not in line with defective LCK function [ 25 , 26 ]. Lastly, beyond LCK, various inborn errors of immunity (IEIs) have been reported with deficiencies of proteins directly or indirectly activated by LCK, such as ZAP70 [ 27 ], LAT [ 28 ], SLP76 [ 29 ], ITK [ 30 , 31 ] or components of the TCR-CD3 complex itself [ 32 – 35 ].
Here, we describe a novel complete biallelic LCK missense variant (c.1393T>C, p.C465R) in a patient with profound T-cell immune deficiency presenting with recurrent severe infections and characterize the molecular and cellular consequences of the variant for TCR signaling and T-cell function.

Methods
Human Samples
Blood samples were taken from the patient, relatives, and healthy volunteers who were treated at Mustafa Eraslan-Fevzi Mercan Children’s Hospital at Erciyes University and analyzed there and at the Dr. von Hauner Children’ Hospital at Ludwig-Maximilians-Universität München. Informed consent was obtained from both parents. This study was approved by Erciyes University local ethics committee (permit number: 2018/388) and conducted according to current ethical and legal guidelines and the Declaration of Helsinki.
Sequencing
Whole exome sequencing (WES) was performed at Intergen NGS facility, Ankara, Turkey. DNA was isolated using a magnetic bead capture method (MagPurix, Zinexts). Exome enrichment was done using the Twist capture kit (TwistBiosciences). Sequencing was performed on an MGIseq DNBSEQ-G400 (MGI Tech Co.). Data were analyzed and interpreted following the ACMG criteria [ 36 ]. PCR amplification was performed with in-house designed primers. Amplicons were checked by 2% agarose gel electrophoresis. Confirmation sequencing was performed by next-generation sequencing on MiSeq equipment (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. Data were evaluated with IGV 2.3 (Broad Institute) software. Sanger sequencing for genotype confirmation was done using the following primers: 5′-ACCTCTAGTGTGACCTTACCA-3′ (forward), 5′-GCAGAGTCCACGCAACTACA-3′ (reverse) following standard protocols.
Lymphocyte Isolation and Cell Culture of T-Cell Blasts
Peripheral blood mononuclear cells (PBMCs) were isolated by density gradient centrifugation from peripheral blood samples using Ficoll-Paque Plus (Cytiva). PBMCs were frozen in human serum supplemented with 10% dimethyl sulfoxide (DMSO) in liquid nitrogen.
Primary T-cell blasts were generated from PBMCs of the patient and healthy controls (HCs). Frozen PBMCs stored in liquid nitrogen were quickly thawed and resuspended in prewarmed complete RPMI 1640 Glutamax (Invitrogen) supplemented with 10% FCS (Invitrogen) and Penicillin/Streptomycin 100U/ml (Invitrogen). PBMCs were stimulated with 5ng/ml phorbol-12 myristate-13-acetate (PMA) (Sigma), 1μM ionomycin (Sigma), and 200U/ml IL-2 (Novartis). After 2 days, cells were washed and cultured in complete RPMI with 100U/ml IL-2.
Cloning and Plasmids
The 2nd generation lentiviral plasmid pCDH-CMV-insert-EF1a-LNGFR was provided by Dr. Thomas Magg, and the 3rd generation lentiviral plasmid pLJM1-EGFP[ 37 ] (Addgene #19319) was provided by Daniel Petersheim. The lentiviral helper plasmids psPAX2 (Addgene #10703) and pMD2.G (Addgene #12259) were provided by Dr. Oreste Acuto. Full-length LCK gene was amplified from the cDNA of healthy control PBMCs and subcloned into pCDH-CMV-insert-EF1a-LNGFR or pLJM1-EGFP. LCK-C465R was produced by site-directed mutagenesis using Q5 polymerase (New England Biolabs). A C-terminal HA-Tag was introduced into pCDH-LCK WT and C465R by PCR with a reverse primer specific for the C-terminus of LCK fused to a HA-Tag. All constructs were verified by Sanger sequencing.
Cells, Transfections, and Lentiviral Transductions
Cell lines were maintained at 37°C with 5% CO 2 in a humidified incubator. Jurkat cells (clone E6.1, ATCC TIB-152), LCK-deficient Jurkat cells (J.CaM1.6, ATCC CRL-2063), and derived cell lines were maintained in RPMI 1640 Glutamax (Gibco) medium supplemented with 10% FCS (Invitrogen). The human embryonic kidney epithelial cell (HEK293) derivative Lenti-X 293T cells (Takara, Cat-No. 632180) were maintained in DMEM (Gibco) supplemented with 10% FCS and 4mM Glutamax (Gibco). Lentiviral particles were produced in Lenti-X 293T cells by co-transfection of the transfer plasmids pCDH or pLJM1 with the packaging plasmids psPAX2 and pMD2.G complexed with polyethylenimine (PEI, linear MW25K, Polysciences, Cat-No. 23966-100). Forty-eight hours after transfection, viral supernatants were harvested, filtered, and used for the transduction of J.CaM1.6 cells in the presence of 5mg/ml polybrene. Twenty-four hours post-infection, cells were washed and re-suspended in RPMI 10% FBS. Forty-eight hours post-infection, puromycin selection was started on cells transduced with pLJM1, which contains a puromycin-resistance gene.
Stimulation for Immunoblotting
Cells were rested for 15 min on ice in RPMI 0% FCS. For anti-CD3/CD28 stimulation, cells were incubated for 15 min on ice with 1μg/ml soluble anti-CD3 (clone OKT3, BioLegend) with or without 5μg/ml anti-CD28 (clone CD28.2, BioLegend) and washed once with RPMI 0% FCS. In primary cells, antibodies were cross-linked with 10μg/ml goat anti-mouse IgG (BD) for 15 min on ice. In Jurkat cells, crosslinking was not performed. Stimulation was initiated by shifting cells to 37°C for the indicated times (2, 5, 15 min). Alternatively, cells were stimulated for 5 min at 37°C with 10ng/ml PMA and 1μM ionomycin or left untreated. The specific LCK inhibitor A770041 (Axxon Medchem) served as a negative control in healthy control cells. Cells were incubated with 10μM A770041 at 37°C for 10 min. After stimulation, cells were centrifuged at 4°C, and pellets were lysed immediately.
Cell Lysates and Immunoblotting
Stimulated or unstimulated cells were pelleted at 4 °C, and pellets were vigorously resuspended in ice-cold complete lysis buffer (50mM Tris-HCl (pH 7.6), 150mM NaCl, 10mM NaF, 1mM Na 3 VO 4 , 1% Triton X-100, proteinase inhibitor) for 15 min on ice. Lysates were cleared by centrifugation at 15,000×g, 4°C for 15 min. The leftover cleared lysates were boiled with Laemmli sample buffer (Sigma). For whole cell lysates of Jurkat E6.1, J.CaM1.6 and transduced cells for immunoblots for LCK expression, total protein was quantified by BCA Protein Assay Kit (Thermo Scientific). Lysates were separated by SDS-PAGE and transferred onto a nitrocellulose membrane. After a blocking step with 3% BSA TBS-T, immunoblotting was performed with the following antibodies: mouse anti-human FYN (clone E-3), mouse anti-human GAPDH (clone 6C5), mouse anti-human LCK (clone 3A5), mouse-IgGκ BP-HRP, rabbit-IgGκ BP-HRP, anti-mouse-IgG-HRP (all Santa Cruz Biotechnology) or rabbit anti-human pZAP70 pY319 (clone 65E4), rabbit anti-human polyclonal pSRC-family kinase (pSFK) pY416, rabbit anti-human pERK1/2 pT202/pY204 (clone 197G2), and rabbit anti-HA-tag (clone C29F4) (all Cell Signaling Technologies).
Surface and Intracellular Antigen Flow Cytometry Staining
0.5 million cells were harvested and washed twice with 1ml FACS buffer (PBS + 2% FCS). Cells were fixed with 150μl pre-warmed fixation solution (BD Cytofix®, BD Biosciences) for 10 min at 37°C. For staining of intracellular antigens, fixed samples were washed twice in 150μl permeabilization buffer (BD Perm/Wash I, BD Biosciences), re-suspended in 150μl permeabilization buffer, and incubated at RT for 30 min. Permeabilized cells were stained in 50μl permeabilization buffer containing the respective antibody dilution. For fluorescent-conjugated primary antibody staining, samples were incubated for 2h at 4°C. When fluorescent-conjugated secondary antibodies were used, they were diluted in 50μl permeabilization buffer and added to cells for 30 min at RT in the dark. Cells were washed 3 times with 1ml permeabilization buffer after each staining and twice with 1ml FACS buffer before surface staining. For staining of surface-expressed LNGFR, mouse anti-LNGFR-PE was added in 50μl of FACS buffer and stained for 30 min at RT in the dark. Cells were washed 3 times in FACS buffer, and samples were acquired at a LSRFortessa flow cytometer (BD Biosciences). The following antibodies were used:
Calcium-Flux Assay
Cells were harvested and rested for 2 h at 37°C, 5% CO 2 in RPMI 1640 Glutamax (Gibco) 10mM Hepes without FCS. Cells were labeled with 5μM Indo-1-AM (Invitrogen) in RPMI 10mM Hepes for 30 min in the dark at 37°C, 5% CO 2 . After 30 min, RPMI 10mM Hepes 5% FCS was added to remove excess Indo-1-AM. After 30 min, cells were washed twice in RPMI 5% FCS and stained with anti-LNGFR-PE for 30 min in RPMI 5% FCS at RT in the dark. Cells were washed twice in 5ml RPMI 5% FCS and resuspended in RPMI 5% FCS before being acquired at a LSRFortessa flow cytometer (BD Biosciences). For calcium-flux measurement, cells were acquired for 1 min to record the baseline before addition of mouse anti-CD3 (BioLegend, clone OKT3) and anti-CD28 (clone CD28.2, BioLegend) both 2μg/ml followed by crosslinking with 4μg/ml goat anti-mouse IgG after 2 min. After 9 or 10 min, 1μM ionomycin was added to achieve maximum calcium-flux. Data were analyzed using FlowJo V9 (TreeStar).
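Indo-1 is ratiometric: Ca2+ binding shifts its emission, so flux is typically tracked as the ratio of Ca2+-bound to free emission normalized to the pre-stimulation baseline. A sketch of that computation with synthetic values (not study data; the exact normalization applied in FlowJo may differ):

```python
def normalized_indo1_ratio(bound, free, baseline_frames):
    """Per-timepoint Indo-1 ratio (Ca2+-bound / free emission),
    normalized to the mean ratio over the pre-stimulation baseline
    frames, so the trace starts near 1 and rises with calcium flux."""
    ratios = [b / f for b, f in zip(bound, free)]
    baseline = sum(ratios[:baseline_frames]) / baseline_frames
    return [r / baseline for r in ratios]

# Synthetic intensities: two baseline frames, then stimulation:
normalized_indo1_ratio([1, 1, 2, 3], [2, 2, 2, 2], baseline_frames=2)
# -> [1.0, 1.0, 2.0, 3.0]
```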
Immunophenotyping of Peripheral Blood Mononuclear Cells
Patient’s and healthy control PBMCs were thawed and washed with PBS (Gibco). An antibody master mix was prepared in BD Brilliant stain buffer (BD Biosciences), and samples were stained for 15 min at RT. Sample acquisition was performed on a LSRFortessa flow cytometer (BD Biosciences), and data were analyzed using FlowJo V9 (TreeStar). The following antibodies were used:
Fixable viability stain 780 (BD Biosciences) was used for viability staining.

Results
Case Description of a Girl with Profound T-Cell Immune Deficiency
The female patient was born at term to consanguineous parents of Syrian descent (Fig. 1 A). There were 4 healthy siblings and one sister, who had died of respiratory failure due to a fulminant respiratory infection at the age of 7 months. The patient’s postnatal presentation, including body weight, body height, and head circumference, was unremarkable. Newborn screening had not been performed, because the patient did not have access to a newborn screening program. She developed normally until the age of 6 months, when she was admitted to a community hospital with fever and coughing, a diffuse maculo-papular rash, and oral and perianal candidiasis. After 1 month, she was transferred with progressive respiratory failure to Erciyes University, Kayseri, for further evaluation and treatment. The chest X-ray showed diffuse bilateral infiltrates (Fig. 1 B). At the time of transfer, inflammation markers and leukocyte count were normal, but she had lymphocytopenia [lymphocytes 1,550/mm 3 (N. 4,000–13,500)] and thrombocytosis [platelets 439,000/mm 3 (N. 310,000 ± 68,000)]. Suspicion of an IEI was raised, and flow cytometric analysis of lymphocyte subsets revealed T-cell lymphocytopenia with an absolute reduction in helper and cytotoxic T-cells, while B- and NK-cell numbers were normal [CD3 + T-cells, 667/mm 3 (N. 2,400–8,100); CD4 + T-cells, 289/mm 3 (N. 1,400–5,200); CD8 + T-cells, 332/mm 3 (N. 600–3,000)] (Table 1 , row 1). High viral loads of cytomegalovirus (CMV) (6.2 × 10 6 copies/ml), Epstein-Barr virus (EBV) (4.5 × 10 3 copies/ml), and adenovirus (ADV) (16.4 × 10 6 copies/ml) were detected in the blood, and ganciclovir and cidofovir treatment was initiated. In addition, because the patient had received BCG vaccination at the age of 2 months, treatment with rifampicin and isoniazid was started, but discontinued after 2 months because of hepatitis with elevated liver function tests (LFTs) (AST 1,166 U/l; ALT 632 U/l; GGT, 218 U/l; LDH 1,305 U/l).
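Identifying which subsets fall below their age-specific lower reference limits, as in the counts above, is a simple comparison. An illustrative Python sketch (function name is ours) using the values quoted in the case description:

```python
def flag_low_counts(counts, normal_ranges):
    """Return the lymphocyte subsets whose absolute count (cells/mm^3)
    falls below the age-specific lower reference limit."""
    return [subset for subset, count in counts.items()
            if count < normal_ranges[subset][0]]

# Values quoted in the case description (counts and N. ranges):
counts = {"lymphocytes": 1550, "CD3": 667, "CD4": 289, "CD8": 332}
ranges = {"lymphocytes": (4000, 13500), "CD3": (2400, 8100),
          "CD4": (1400, 5200), "CD8": (600, 3000)}
flag_low_counts(counts, ranges)  # all four subsets are below range
```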
LFTs gradually decreased to baseline over the course of 4 weeks. While EBV and ADV became undetectable after 2 weeks of treatment, CMV viral load was still elevated after 40 days (3.6 × 10 6 copies/ml). After exclusion of ganciclovir resistance, foscarnet was added, and after 2 weeks of consecutive combined treatment, a decline of CMV viral load could be observed (427 copies/ml). Overall, her clinical condition stabilized such that after 3 months of anti-infective treatment, she could be discharged with trimethoprim-sulfamethoxazole, fluconazole, and acyclovir prophylaxis and was scheduled for control at the pediatric hematology-oncology outpatient clinic. Allogeneic hematopoietic cell transplantation was offered but declined by the caregivers. Unfortunately, the family was lost to follow-up, and the patient died at the age of 12 months, 4 months after discharge, most probably due to respiratory failure following severe pneumonia.
A Novel LCK Variant Impairs LCK Protein Expression and Proximal TCR Signaling
The patient’s clinical presentation and immunological phenotype suggested a profound T-cell immune deficiency. Whole exome sequencing revealed a novel homozygous missense variant in exon 13 of LCK (HGNC:6524; c.1393T>C, p.C465R). Sanger sequencing confirmed a biallelic mutation in the patient, and both parents were found to be heterozygous, supporting an autosomal-recessive inheritance pattern in this consanguineous family (Fig. 1 C). Three of the four healthy siblings (II.2, II.4, II.5) were heterozygous carriers, while one sister (II.3) was homozygous wildtype (Fig. 1 C). The LCK variant was not present in the genome aggregation database (gnomAD) and was predicted to be pathogenic by a combined annotation-dependent depletion (CADD) score [ 42 ] of 26.1 and a mutation significance cut-off (MSC) score [ 43 ] of 3.13. Only very few homozygous variants in LCK are found in gnomAD (Fig. 1 D), none of them having similarly high CADD scores and low allele frequencies, and even heterozygous variants are rare (Fig. S1A ). The gnomAD pLI score, which is a measurement of LOF-intolerance, is 0.99 for LCK, with a pLI ≥ 0.9 being LOF-intolerant [ 44 ], and a low observed/expected ratio (o/e) for pLOF variants (o/e = 0.067; 90% CI 0.027–0.21), suggestive of a haploinsufficient gene, is calculated [ 45 ]. For comparison, the pLI score for LOF variants in ZAP70 was 0.88 and the o/e ratio was 0.17 (90% CI 0.09–0.36).
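The MSC provides a gene-level significance cutoff for CADD scores: a variant is flagged as predicted damaging when its CADD score exceeds the gene's MSC, as it does here (26.1 vs 3.13). A minimal illustrative sketch (helper name is ours):

```python
def predicted_damaging(cadd_score, gene_msc):
    """True when the variant's CADD score exceeds the gene-specific
    mutation significance cutoff (MSC)."""
    return cadd_score > gene_msc

# LCK c.1393T>C (p.C465R): CADD 26.1 vs LCK MSC 3.13
predicted_damaging(26.1, 3.13)  # -> True
```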
The LCK variant resulted in an exchange of a small neutral cysteine for a large basic arginine (C465R) in a highly conserved region towards the end of the C-terminal lobe (C-lobe) of the KD [ 11 ] (Fig. 1 E and F). C465 is conserved not only across species (Fig. 1 G), but also in all members of the SRC kinase family (Fig. S1B ), implying an important role in protein stability and/or function. In both open (PDB ID: 3LCK; Fig. 1 F) and closed (PDB ID: 2PL0; Fig. S1C ) forms of LCK, the sulfhydryl group of C465 forms a hydrogen bond with the neighboring residue P466, which in turn forms a hydrogen bond with Y469 in the α-helix H, and the introduction of an arginine was expected to disturb these interactions.
Immunoblotting for LCK in T-cell blasts from the patient and a healthy control revealed absent LCK protein expression in the patient (Fig. 2 A, orange arrowheads), suggesting that C465R influenced protein expression or stability, or both. Active LCK is phosphorylated on Y394, corresponding to Y416 of SRC. As expected, immunoblotting with an anti-SFK pY416 antibody showed almost complete loss of active SRC-family kinases corresponding to the absent active LCK doublet in the patient T-cells (Fig. 2 B, red arrows). A minor detectable band most likely represented residual FYN-T expression in patient T-cells (Fig. 2 B, black arrow). Consistent with the absence of LCK protein expression, phosphorylation of ZAP70 was absent after stimulation with either anti-CD3 or anti-CD3/CD28, in contrast to a healthy control (Fig. 2 B). Very low residual levels of pERK1/2 (loading corrected pERK1/2 band intensity: HC unstimulated = 1, patient unstimulated = 0.2) were detectable in unstimulated patient cells that did not increase after stimulation with either anti-CD3 or anti-CD3/CD28 (Fig. 2 B). Importantly, although LCK and ZAP70 phosphorylation was not rescued by stimulation with PMA/ionomycin (P/I), pERK1/2 levels were normal after P/I stimulation in patient T-cells as compared to the healthy control (loading corrected pERK1/2 band intensity: HC unstimulated = 1, HC P/I = 5.8, patient unstimulated = 0.2, patient P/I = 5.8), and inhibition with the LCK-specific inhibitor A770041 abrogated ZAP70 and ERK1/2 phosphorylation in T-cells of the healthy control (Fig. 2 B). Collectively, these results suggested that the LCK C465R variant most severely affected LCK protein expression and LCK-dependent proximal TCR signaling in patient T-cells.
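The "loading corrected" pERK1/2 band intensities quoted above follow standard densitometry arithmetic: each band is divided by its lane's loading control and then expressed relative to the unstimulated healthy-control lane, which is defined as 1. The raw intensities in this sketch are made up for illustration; only the normalization logic mirrors the text:

```python
def normalized_intensity(band: float, loading: float,
                         ref_band: float, ref_loading: float) -> float:
    """Loading-corrected band intensity, relative to a reference lane set to 1."""
    return (band / loading) / (ref_band / ref_loading)

# Hypothetical raw densitometry values (band, loading control) per lane
hc_unstim = (50.0, 100.0)   # reference lane -> defined as 1.0
pt_unstim = (10.0, 100.0)   # comes out as 0.2, as reported for the patient

print(normalized_intensity(*hc_unstim, *hc_unstim))  # 1.0
print(normalized_intensity(*pt_unstim, *hc_unstim))  # 0.2
```

Normalizing to a loading control corrects for unequal protein amounts per lane before lanes are compared.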
To verify the impact of the mutation on LCK expression and/or function in a model system, LCK wildtype (WT) or LCK C465R with or without a C-terminal HA-tag was expressed in LCK-deficient J.CaM1.6 Jurkat cells (J.CaM LCK WT or C465R) that were transduced with a lentiviral plasmid containing also the extracellular domain of the low-affinity nerve growth factor receptor (LNGFR, CD271) to identify transduced cells, or a puromycin-selectable plasmid (pLJM1). As in the patient cells, LCK C465R was poorly expressed in J.CaM1.6 Jurkat cells (Fig. 2 C, orange arrowheads, and Figs. S2A and S2B). The defect in mounting a sufficient TCR signaling response was verified by Ca 2+ -flux measurements showing no response in J.CaM LCK C465R as opposed to J.CaM LCK WT (Fig. 2 D and Figs. S2B and S2C). Immunoblotting for pY416 SFK, pZAP70, and pERK1/2 after stimulation with either anti-CD3 or anti-CD3/CD28 showed absent ZAP70 and LCK and ERK1/2 phosphorylation in J.CaM LCK C465R cells as compared to J.CaM LCK WT, while stimulation with P/I induced similar levels of pERK1/2 in J.CaM LCK C465R and J.CaM LCK WT (Fig. 2 E and Fig. S2E), corroborating the defect seen in patient cells.
Aberrant Immune Phenotype of Bi- and Monoallelic LCK-Deficient T-Cells
To further characterize the impact of the LCK C465R variant, we performed immune phenotyping of cryopreserved PBMCs from the patient and her mother by flow cytometry. Two adult healthy controls were analyzed alongside. We confirmed a severe loss of total CD3 + T-cells (patient 19.4% of CD45 + lymphocytes, mother 62%, HC1 58.6%, HC2 73.9%) and CD4 + T-cells (patient 1.81% of CD3 + lymphocytes, mother 23.0%, HC1 44.5%, HC2 55.2%) in the patient (Fig. 3 A, upper row). CD8 + T-cell numbers were less affected, resulting in an inverted CD4/CD8 ratio of 0.02 (Fig. 3 A, upper row). In addition, patient T-cells exhibited a reduction of CD4 and CD8 co-receptor surface expression, which has previously been described as pathognomonic for murine and human LCK deficiency [ 1 , 23 ] (Fig. 3 A and Figs. S3A and S3B ). Unexpectedly, the heterozygous mother also displayed a reduction in the CD4/CD8 ratio (0.36) and in CD4 and CD8 co-receptor expression (Fig. 3 A and Figs. S3A and S3B ). Besides the decrease of CD4 + and CD8 + T-cells, the patient also showed a reduction of αβT-cell frequency with an increase of γδT-cells (Fig. 3 A, lower row) expressing higher levels of TCRγδ on the surface as compared to healthy controls (Fig. S3B ).
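The CD4/CD8 ratio above is simply the quotient of the two subset frequencies; a ratio of 0.02 with 1.81% CD4 + T-cells implies a CD8 + frequency near 90% (1.81 / 0.02 ≈ 90.5). In the sketch below the CD8 value is back-calculated from the reported ratio rather than stated explicitly in the text:

```python
def cd4_cd8_ratio(cd4_pct: float, cd8_pct: float) -> float:
    """CD4/CD8 ratio from subset frequencies (% of CD3+ cells)."""
    return cd4_pct / cd8_pct

# CD4 frequency is reported (1.81%); the CD8 frequency of 90.5% here is
# back-calculated from the reported ratio of 0.02, not taken from the report.
print(round(cd4_cd8_ratio(1.81, 90.5), 2))  # 0.02
```

In healthy adults the ratio is typically above 1, so a value of 0.02 reflects a strongly inverted ratio.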
An almost complete loss of naïve CD4 + and CD8 + T-cells (1.67% and <1%, respectively) could be observed in the patient (Figs. 3 B and C, upper rows). Most patient CD4 + T-cells were CD45RO + CCR7 - effector memory T-cells (EM: 70.3%), but we also noted an increase in the terminally differentiated CD45RO - CCR7 - population (TEM: 3.2% vs HC1: 0.52% and HC2: 0.072%) that has been described to expand in chronic virus infections [ 46 ] (Fig. 3 B). Patient CD8 + T-cells were evenly distributed between the effector memory and terminally differentiated effector cell compartments (EM: 48.2%; TEM: 48.3%) (Fig. 3 C). Further, both CD8 + and CD4 + CD45RO + memory T-cells in the patient displayed an unusual CD38 hi HLA-DR hi double positive state, most likely reflecting exhaustion [ 47 – 50 ] (Fig. S3C ). As with co-receptor expression, we observed that the T-cells of the heterozygous mother displayed an intermediate phenotype between the T-cells of the patient and the two healthy controls, with partial loss of naivety of both CD4 + and CD8 + T-cells (19% and 6.7%, respectively) and a concurrent increase in CD4 + EM T-cells, while CD8 + T-cells were equally distributed between EM and TEM (Figs. 3 B and C). Additionally, CD57, a marker of chronic immune activation and senescence [ 51 ], was increased on both CD4 + and CD8 + T-cells in the patient (36.3% and 82.8%, respectively), but also in the mother (14.9% and 70.7%, respectively) (Fig. 3 B and C, lower rows). Moreover, CD8 + T-cells had almost no CD28 surface expression, another sign of terminal differentiation and chronic immune activation and senescence [ 52 , 53 ] (Fig. S3D ).
Further, analysis of CD4 + helper T-cell subsets revealed a decrease of CCR6 - CXCR3 + CCR4 - Th1 cells in the patient and the mother (4.31% and 6.51%, respectively) (Fig. 3 E). The percentage of CCR6 - CXCR3 - CCR4 + Th2 cells in the patient was similar to that of the healthy controls (Fig. 3 E). Within the CCR6 + T-cell population, percentages of CCR4 + Th17 cells were comparable between patient, mother, and healthy controls (37.5%, 42%, 47.8% and 44.6%, respectively). However, because of a 50% reduction in CD4 + CCR6 + T-cells (Fig. S3E , upper row), the absolute number of Th17 cells was decreased in the patient. Furthermore, both patient and mother had very low percentages of CCR6 + CXCR3 + Th17.1 cells (5.0% and 8.43%, respectively) (Fig. S3E , lower row).
CD4 + CD25 hi CD127 lo Treg cell frequency was elevated in the patient (20.2%), although we noticed that CD25 expression within the Treg gate was dimmer compared with the healthy controls, and thus activated conventional T-cells may have been included in this gate [ 54 ] (Fig. 3 F). Patient Tregs showed either an activated (CD45RO + HLA-DR + ) or memory (CD45RO + HLA-DR - ) phenotype (Fig. S3F ).
Taken together, the LCK-deficient patient showed a severe absolute loss of total CD4 + and to a lesser extent of CD8 + T-cells with a relative increase in γδT-cells and Treg cells and a decrease of Th1 cells. Remaining CD4 + and CD8 + T-cells, including Treg cells, showed an activated, memory phenotype with an increase in CD57 and loss of CD28, indicative of immune senescence. Importantly, surface expression levels of the CD4 and CD8 co-receptors were reduced. Of note, the heterozygous mother showed an intermediate T-cell immune phenotype.

Discussion
In the current study, we identified a novel biallelic missense LCK c.1393T>C, p.C465R variant in a patient from a consanguineous Syrian family with profound T-cell immune deficiency characterized by complete LCK protein expression deficiency and ensuing proximal TCR signaling- and CD4 and CD8-co-receptor-mediated functional and phenotypical defects. Both parents as well as three of four healthy siblings were monoallelic carriers of the LCK variant. Another child not amenable to analysis had passed away following a severe infection in the first year of life. Clinically, the patient presented in infancy with severe infections and passed away at the age of 12 months due to respiratory failure likely secondary to an infection, precluding more in-depth experimental analysis of primary T-cells.
We detected very low levels, if any, of the variant LCK by immunoblotting in patient cells and in LCK-deficient J.CaM1.6 cells transduced with LCK C465R, suggesting reduced protein expression or stability, or both. However, we do not know the exact mechanism that led to reduced LCK protein expression, i.e., whether protein translation or folding was disturbed, or whether the protein was subject to faster degradation. It is likely that the replacement of a small neutral cysteine with a larger basic arginine disturbed the local configuration, with possible consequences beyond the local structure. Interestingly, the adjacent α-helices H and I form part of the interface of c-SRC with the kinase CSK that phosphorylates the inhibitory tyrosine Y527 (Y505 in LCK) [ 55 ]. For a better understanding of the structural changes induced by C465R and their potential functional consequences, further analysis with molecular dynamics simulations of the variant compared to wildtype LCK could be employed.
To our knowledge, this is the third LCK variant leading to a LOF phenotype to be described in the biomedical literature [ 1 , 2 ]. The first reported LCK deficiency was due to a pathogenic LCK c.1022T>C missense variant, leading to the residual expression of a signaling-incompetent LCK p.L341P with abrogated protein tyrosine phosphorylation and Ca 2+ -flux (Table 1 , row 3) [ 1 ]. The patient presented early in life with profound T-cell immune deficiency characterized by severe infections, autoinflammation, autoimmunity, and ensuing failure to thrive. Li et al. reported 3 siblings of a consanguineous family presenting with recurrent pneumonia and severe viral skin disease leading to malignant transformation [ 2 ]. The patients had an intronic LCK c.188-2A>G splice site variant resulting in skipping of exon 3 and mRNA decay. Although the impact on LCK protein level, TCR signaling, and T-cell immune phenotype was not reported, genetic and clinical data along with CD4 + T-cell lymphocytopenia suggested a hypomorphic LCK deficiency (Table 1 , row 2).
Two additional studies have shown defective LCK protein expression in the context of severe combined immune deficiency (SCID) and common variable immune deficiency (CVID), respectively [ 25 , 26 ]. Goldman et al. reported a boy presenting with SCID with severe infections and profuse diarrhea (Table 1 , row 5) [ 25 ]. He was lymphocytopenic with a greater reduction in CD4 + than CD8 + T-cells and reduced B-cells with hypogammaglobulinemia. CD8 + T-cells had absent CD28 surface expression and reduced LCK expression; however, TCR proximal protein tyrosine phosphorylation, Ca 2+ -flux and ERK-phosphorylation were unperturbed. cDNA analysis from PBMCs revealed the presence of LCK wildtype cDNA and an additional LCK transcript lacking exon 7 ( LCK ΔExon7 ) that genetically and mechanistically remained unexplained. The adult CVID patient reported by Sawabe et al. had almost asymptomatic bihilar lymphadenopathy, reduced CD4 + T-cells, class-switched memory B-cells, and IgG (Table 1 , row 4) [ 26 ]. Two LCK transcripts, corresponding to LCK wildtype and to LCK ΔExon7 , were detected in the patient’s PBMCs, and reduced LCK protein expression was noted. Exon 7 encodes a part of the kinase domain including the ATP binding site necessary for kinase activity (NP_005347.3, aa 212-262). Indeed, Germani et al. [ 56 ] and we (Hauck and Latour, unpublished data) showed that LCK ΔExon7 protein or a variant with a mutation in the active site (K273E) lost its catalytic kinase activity. LCK ΔExon7 is the sole transcript expressed in the J.CaM1.6 cell line that lacks LCK protein expression and has been generated under continuous mitogenic stimulation with PHA from Jurkat E6.1 cells [ 4 , 57 ]. Low levels of the transcript coding for LCK ΔExon7 are detectable in the parental Jurkat E6.1 cell line, thus the chronic PHA stimulation might have given a selective survival advantage to the LCK protein-deficient cells. 
Furthermore, LCK ΔExon7 was shown to be expressed in PBMCs of healthy donors [ 58 , 59 ]. Overall, we conclude that in both cases, the expression of LCK ΔExon7 was probably not the cause, but the consequence of the underlying genetically unresolved SCID and CVID, respectively.
The immune phenotype of the LCK-deficient patient described here (LCK p.C465R) was very similar to that reported by Hauck et al. (LCK p.L341P) as well as to that of the lck -/- mouse model [ 23 , 60 ], with pronounced T-cell lymphocytopenia, inverted CD4/CD8 ratio, loss of T-cell naivety, an exhausted memory phenotype in both CD4 + and CD8 + T-cells, and expansion of γδT-cells. However, we noted some differences between the two cases, such as the percentage of Treg cells and the composition of immunoglobulin isotypes, which may reflect either different functional consequences of the individual mutations or contact with different infectious agents and/or influences by additional genetic variants present in the patients. Immunologic workup of further LCK-deficient cases will help clarify genotype-phenotype correlations.
The immune phenotype of LCK deficiency reported here for the first time included T helper subsets showing a decrease of Th1, Th17, and Th17.1 cells. It is important to note, however, that due to the low frequency of CD4 + T-cells, the absolute numbers of events in the analysis of T helper subsets were low and analysis of further patients will give more clarity. The T helper subset phenotype is contrary to what has been reported from a mouse model with a post-thymic LCK gene deletion, which showed skewing towards Th1 responses in CD4 + T-cells [ 61 ], while peripheral T-cells in non-conditional lck -/- mice have not been analyzed in such detail. This may be due to the differences in the environment, in particular the chronic viral infections that the reported patient has suffered from.
A prominent feature of human and murine biallelic LCK deficiency is reduced surface expression of the co-receptors CD4 and CD8 on T-cells [ 1 , 23 ], and here we corroborated this finding. Additionally, we noted reduced co-receptor expression in an individual with monoallelic LCK deficiency, which previously has been described for CD4 in lck +/- mice [ 23 ]. While CD4 internalization and degradation require phosphorylation of S408 by PKCθ and dissociation of LCK from CD4 to enable recognition of a dileucine motif by the clathrin adaptor AP2 [ 62 – 64 ], the mechanism for CD8 is less well understood, and the shorter cytoplasmic tail of CD8 is devoid of both serine and dileucine motifs. This is in line with a recent study of a variant LCK unable to bind co-receptor, reporting differential regulation of CD4 and CD8 surface expression [ 21 ]. Thus, reduced LCK expression may increase CD4 endocytosis or impede CD4 stabilization through other mechanisms, thereby reducing CD4 surface expression. Further experiments are needed to clarify the causes of reduced co-receptor expression in LCK deficiency, but we propose that it may be of value for early detection of mono- and/or biallelic LCK deficiency.
Besides reduced co-receptor expression, the mother with monoallelic LCK deficiency had further immune phenotypic alterations such as loss of CD4 + T-cells with an inverse CD4/CD8 ratio, CD4 + and CD8 + T-cell loss of naivety, and exhaustion. These changes could also be a sign of chronic viral infections, such as CMV [ 65 ]. Unfortunately, we were not able to acquire clinical information or blood samples for further analyses of the entire family. It is noteworthy that as opposed to ZAP70 or ITK deficiency, only one other case of complete LCK deficiency has been reported [ 1 ] and only two more are found in the ClinVar Database. The homozygosity in the case presented by Hauck et al. was due to a rare uniparental disomy; thus, only the mother was heterozygous for the LCK mutation [ 1 ]. Taken together, this raises the possibility that heterozygosity for a LOF variant in LCK may lead to clinical manifestations and purifying selection. Importantly, a recent study reported impaired proximal TCR signaling (pCD3 ζ, pZAP70, total-pY) in lck +/- mice relative to WT mice [ 18 ]. Thus, it will be of interest to carefully screen individuals with immune dysregulation for monoallelic LCK deficiency in the future.
In summary, we report the second case of complete biallelic LCK deficiency causing profound T-cell immune deficiency with CD4 + and CD8 + T-cell lymphocytopenia and reduced CD4 and CD8 cell surface co-receptor expression. In individuals with suspicion of mono- or biallelic LCK deficiency, co-receptor expression should be analyzed and could streamline immediate genetic workup.

Abstract

Lymphocyte-specific protein tyrosine kinase (LCK) is an SRC-family kinase critical for initiation and propagation of T-cell antigen receptor (TCR) signaling through phosphorylation of TCR-associated CD3 chains and recruited downstream molecules. Until now, only one case of profound T-cell immune deficiency with complete LCK deficiency [ 1 ] caused by a biallelic missense mutation (c.1022T>C, p.L341P) and three cases of incomplete LCK deficiency [ 2 ] caused by a biallelic splice site mutation (c.188-2A>G) have been described. Additionally, deregulated LCK expression has been associated with genetically undefined immune deficiencies and hematological malignancies. Here, we describe the second case of complete LCK deficiency in a 6-month-old girl born to consanguineous parents presenting with profound T-cell immune deficiency. Whole exome sequencing (WES) revealed a novel pathogenic biallelic missense mutation in LCK (c.1393T>C, p.C465R), which led to the absence of LCK protein expression and phosphorylation, and a consecutive decrease in proximal TCR signaling. Loss of conventional CD4 + and CD8 + αβT-cells and homeostatic T-cell expansion was accompanied by increased γδT-cell and Treg percentages. Surface CD4 and CD8 co-receptor expression was reduced in the patient T-cells, while the heterozygous mother had impaired CD4 and CD8 surface expression to a lesser extent. We conclude that complete LCK deficiency is characterized by profound T-cell immune deficiency, reduced CD4 and CD8 surface expression, and a characteristic TCR signaling disorder.
CD4 and CD8 surface expression may be of value for early detection of mono- and/or biallelic LCK deficiency.
Supplementary Information
The online version contains supplementary material available at 10.1007/s10875-023-01602-8.
Open Access funding enabled and organized by Projekt DEAL.
Acknowledgements
We thank the patient and her family for their personal contribution. We thank Gabriele Heilig for technical assistance with sequencing and the flow cytometry facility at the Dr. von Hauner Children’s Hospital for excellent technical support with immunophenotyping.
Author Contribution
Anna-Lisa Lanz, Serife Erdem, and Raffaele Conca performed experiments. Alper Ozcab, Gulay Ceylaner, Murat Cansever, Serdar Ceylaner, Turkan Patiroglu, and Ekrem Unal provided clinical information. Thomas Magg, Oreste Acuto, Sylvain Latour, Christoph Klein, Ahmet Eken, and Fabian Hauck conceptualized the study. Anna-Lisa Lanz and Fabian Hauck wrote the manuscript. Ahmet Eken and Fabian Hauck supervised experiments. All authors read and approved the final manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL. Fabian Hauck received funding from the Care-for-Rare Foundation (C4R, 160073), the Else Kröner-Fresenius Stiftung (EKFS, 2017_A110), and the German Federal Ministry of Education and Research (BMBF, 01GM2206D). Anna-Lisa Lanz received funding from the Care-for-Rare Foundation. This study was partly supported by grants from the Turkish Academy of Science GEBIP and Science Academy BAGEP awards to Ahmet Eken, and Erciyes University BAP grant TCD2021-10863 to Ekrem Unal.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics Approval
This study was approved by Erciyes University local ethics committee (Permit number: 2018/388) and conducted according to current ethical and legal guidelines and in line with the Declaration of Helsinki.
Consent to Participate
Written informed consent was obtained from the parents.
Consent for Publication
The authors affirm that human research participants or their legal guardians provided informed consent for publication of the images in Figures 1, 2, and 3 and Supplementary Figures S1–S4.
Conflict of Interest
The authors declare no competing interests.

Citation: J Clin Immunol. 2024 Dec 15; 44(1):1. License: CC BY.
PMC10728221 (PMID: 38110598)

Palliative care, with its focus on comprehensive patient assessment encompassing physical, social, emotional, and spiritual pain, plays a crucial role in modern medicine. Despite its significance, integration with oncology and other healthcare specialties often occurs late in the disease trajectory. Strategies to bridge this gap include considering a “rebranding” of palliative care to “supportive care.” Early initiation of palliative care, although challenging to define precisely, aims to improve the quality of life for patients and their families. Studies show some benefits, but the evidence remains limited. An embedded model that encourages interdisciplinary collaboration between oncologists and palliative care practitioners has shown promise. However, it raises questions about training and availability of palliative care specialists. A broader approach involves integrating palliative care principles into medical and nursing education to ensure early recognition of patient needs and empathetic communication. Regular monitoring of patients’ physical and non-physical needs, along with appropriate interventions, can alleviate suffering and improve patient outcomes. Ultimately, the integration of palliative care into oncology and other disciplines focuses on addressing the individual’s needs and understanding their unique experience of suffering.
Palliative care holds a unique place among medical specialties, where it has gained recognition by virtue of its approach to cancer: treating a patient with a tumor versus the tumor of the patient [ 1 ].
Scientific research in palliative care, academic teaching and post-graduate programs are conducted in independent palliative care units and departments. Nonetheless, its integration with other health care providers and oncologists appears difficult to achieve, until a patient becomes terminally ill or enters a very advanced stage of disease [ 3 , 4 ].
Strategies to facilitate integration between oncologists and palliative care teams are described below.
“REBRANDING” palliative care
Palliative care practitioners have suggested a change of name from “palliative care” to “supportive care” to overcome the stigma which some think is associated with the former. Attempts to define supportive care and palliative care as being synonymous spurred debate [ 5 – 9 ] and blurred the distinctions between the two, with the risk that palliative care would lose its identity. Jean Klastersky traced the history of supportive care from initial chemotherapy for acute myeloid leukemia, where supportive care predominantly involved blood products and management of febrile neutropenia, through to the advent of cisplatin for solid tumors in which control of nausea and vomiting became a priority [ 10 ], and ultimately the control of ocular, dermatological or endocrine toxicities.
If a name creates "discomfort" for patients and physicians, how then shall we call cancer, pain, end of life, death, and dying?
Upstream migration/earlier initiation
Some palliative care practitioners have suggested starting palliative care earlier during the cancer trajectory, raising the question of what precisely is “early” about such care [ 11 ].
The rationale was that an earlier start would improve quality of life for patients and their families. The rebranding of palliative care could also be associated with earlier referral, possibly because it would make the term more acceptable by removing its direct link to end of life and dying.
Early palliative care entails empathetic communication with patients about their prognosis, symptom assessment and management, and advance care planning. Although some randomized controlled trials (RCTs) involving advanced cancer patients have reported higher quality of life scale scores and suggested a positive effect on survival in patients referred early to palliative care versus standard care, a Cochrane meta-analysis confirmed these results only with a low or a very low level of evidence [ 12 ].
In their recent meta-analysis and systematic review comparing the effects of early palliative care versus standard cancer care or on-demand palliative care on patients with incurable cancer, Huo et al. [ 13 ] reported that only 16 of 1376 studies were included. The pooled data suggested better quality of life, fewer symptoms, better mood, longer survival, and higher probability of dying at home for the early palliative care patients than for the control group. The evidence level was low, however, because of the high heterogeneity of quality-of-life measures and the few studies for the other results [ 13 ].
But what does “early palliative care” mean in practice? Is it reasonable to ask for a palliative care consult at the diagnosis of cancer? Should “early” care be started at a particular “early” stage of disease, or because patients’ needs have been assessed “earlier” during the disease trajectory?
Embedded model (location)
The embedded model foresees interdisciplinary collaboration between oncologists and palliative care practitioners working as a team [ 14 – 17 ]. This would allow space for sharing clinical information about individual patients and for integrating specialist care. For example, in their retrospective, pre-/postintervention study involving patients with thoracic malignancies, Agne et al. [ 15 ] reported that after implementation of an embedded palliative care clinic, the number of referrals for palliative care rose, whereas the median waiting time between referral request and first visit and the time between the first oncologic visit and completion of referral decreased. Such integration may foster collaboration between oncologists and palliative care practitioners, with the added benefit for patients of a shorter waiting time to the first encounter with the palliative care team. However, what is the viability of an embedded model that connects two disciplines that differ in objectives and training? Are there sufficient palliative care practitioners who can join with other health care professionals to provide early and integrated services for treating patients with cancer? Furthermore, how do we want to make palliative care accessible to all who need it? How do we want to reduce health inequality and mitigate unnecessary suffering? We believe that embedding alone is not the solution.
A proposal
Our proposal is to focus on basic education in palliative care for all students during their medical/nursing education and continuous professional education for those involved in the routine care of patients with life-limiting disease or progressive chronic conditions [ 18 , 19 ].
The first step is to disseminate screening for palliative care needs, followed by regular monitoring of a patient’s physical symptoms; emotional, social, and spiritual needs; and financial distress. This can be done using simple, validated tools self-reported by the patient. Patient-reported outcomes can then help to refine and adjust interventions to the patterns of suffering, including changes in therapeutic prescriptions or the provision of spiritual, social, emotional, or financial support.
Second, empathic communication is as important as pharmacological and non-pharmacological interventions implemented according to evidence-based guidelines for the tumor and related symptoms. For this reason, health care providers, whatever their field of interest, should learn early to communicate with their patients. While restraints on time and resources are often cited as barriers to engaging in an empathetic approach, communication and assessment of suffering are an integral part of care, if not the care itself, to which health-care professionals are deontologically committed. Set within a broader medical education program, the basics of early palliative care can be learned and then extended to all patients or those with chronic or incurable disease, starting from the initial encounter if necessary. When needed, referral for consultation with a palliative care specialist and team may identify and address a patient’s physical and non-physical needs. Teaching early recognition of palliative care needs through validated screening tools and empathetic communication with patients and families may help to alleviate emotional and physical burdens, ensuring that all needs are recognized in a timely manner and that, in cases of refractory or severe suffering, patients are properly referred to specialists.
In conclusion, we believe palliative care can be integrated with oncology and other disciplines by centering medicine around the needs of the person. This can be done through appropriate assessment and communication. Teaching how to screen for and assess suffering early in the course of disease, and the importance of empathetic communication, should be part of medical training, so as to spread those concepts to the broadest audience possible. This may also help trainees and future doctors and nurses to better understand when is the right time to call for a specialist referral, while providing some primary palliative care themselves. We think that this educational proposal may work better than other strategies to implement early palliative care.
Future research is necessary to evaluate the efficacy of our proposal. This would require a commitment from all doctors to engage with the field of palliative care and with an empathetic approach to the patient, which will be an added value to their specialist clinical skills.

Author contribution
C.C. projected the manuscript and drafted the paper. C.I.R. overviewed the project and reviewed the manuscript.
Data availability
Not applicable.
Declarations
Ethical approval
Not applicable.
Competing interests
The authors declare no competing interests.

Citation: Support Care Cancer. 2024 Dec 19; 32(1):41. License: CC BY.
PMC10728275 (PMID: 38110572)

Introduction
Head and neck cancer (HNC) represents the seventh most common cancer worldwide [ 1 ]. They account for different histologies (mainly squamous cell carcinoma but also salivary glands tumors, undifferentiated carcinoma, melanoma, lymphomas...) located in different head and neck subsites (oral cavity, pharyngeal axis, larynx, paranasal sinuses, and salivary glands).
Radiotherapy (RT) is a cornerstone treatment for cancers of the head and neck region [ 2 ]. It is indicated either as an exclusive treatment or in patients at high risk of local recurrence after surgery. The total dose of RT ranges from 45 to 70 Gy, administered mainly using a standard fractionation schedule (1.8–2.2 Gy/fraction, 1 fraction/day, 5 fractions/week). Platinum-based concurrent chemotherapy is indicated in patients with locally advanced tumors (stage III or IV according to NCCN guidelines) and, in the postoperative setting, in the presence of pathological features such as positive surgical margins and/or extracapsular extension [ 3 ].
Radiation-induced oral mucositis (RIOM) is the most frequent and dose-limiting radiation-related side effect in patients treated with curative RT for HNC [ 4 , 5 ]. Pain and severe dysphagia due to RIOM may lead to significant weight loss, with an overall worsening of the patient’s performance status. Most importantly, RIOM often causes temporary interruption of the radiation course, which has been demonstrated to decrease the efficacy of the radiation treatment [ 6 , 7 ].
Different scales for grading the severity of RIOM are currently used in clinical practice, guiding the radiation oncologist in undertaking preventive and therapeutic strategies [ 8 ]. For instance, the Radiation Therapy Oncology Group (RTOG)/European Organisation for Research and Treatment of Cancer scale considers both the anatomical changes of the oral mucosa (from grade 0, no toxicity, to grade 4, mucosal ulceration/hemorrhage and necrosis) and the level of RT-related pain reported by patients [ 9 ]. Instead, the Common Terminology Criteria for Adverse Events (CTCAE v5.0, November 27, 2017) scale distinguishes different cases, from asymptomatic ones (grade 1), which do not require any medical intervention, to more serious cases requiring urgent nutritional and/or medical intervention (grade 4) or leading to the death of the patient (grade 5).
Despite its clinical relevance, a standardized strategy for preventing and treating RIOM has not yet been defined [ 10 – 14 ]. Recommendations provided by cooperative groups of experts have been published to guide the management of RIOM in daily clinical practice [ 15 – 17 ]. Nevertheless, to date, strategies applied to manage RIOM remain at the institutional and/or personal level, according to internal guidelines and professionals’ expertise.
The aim of the present work was to perform a real-life survey on how RIOM is managed among Italian radiation therapy centers. Moreover, we also analyzed whether the volume of treated patients has an impact on single-institution strategies.
On 11 May 2022, an online survey composed of a total of 53 questions, including both multiple-choice and open-ended ones, was administered through personal contacts to radiation oncologists working in 25 different RT centers across Italy.
The survey was composed of 40 questions divided into three sections: (i) retrospective analysis of patients with HNC treated in 2021 in each center; (ii) strategies generally used for the prevention of RIOM; and (iii) strategies used for the treatment of RIOM in daily clinical practice. The full text of the survey is available in the supplementary materials section (supplementary material S1 ). All the participants gave their consent to the publication and the use of collected data for scientific purposes.
In the retrospective analysis, oncologic treatment characteristics (in terms of radiation technique and concurrent systemic treatments), overall RT treatment time, and treatment interruptions were collected.
Among general strategies applied to prevent and treat RIOM, data on institutional organization (professionals who manage RIOM, availability of dedicated nurses, and/or access to supportive care, nutrition, speech, and psychological services) and on the use of a standardized approach (RIOM data collection using validated scales and adherence to internal and/or published guidelines) were also collected. The approach (prophylactic or therapeutic intervention) to artificial nutrition (both enteral and parenteral) was investigated.
All agents used in daily clinical practice, both for the prevention and for the treatment of RIOM, were recorded and grouped (when feasible).
Data are presented as means/medians across responders (section i) and as counts (sections ii and iii). Moreover, an arbitrary cut-off of 50 treated patients/year was used to define centers as “high-volume” (>50 patients/year) or “low-volume” (<50 patients/year). Results were divided accordingly to compare the two groups in terms of treatment strategies.
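As an illustration of this grouping rule, the classification step can be sketched as follows. This is a minimal sketch, not the authors' analysis code: the center labels and patient counts are hypothetical, and a center with exactly 50 patients/year (not addressed by the survey's definition) is assigned to the low-volume group here by assumption.

```python
# Illustrative sketch of the high-/low-volume classification used in the
# survey. Center names and patient counts below are hypothetical.
from statistics import median

CUTOFF = 50  # arbitrary cut-off (treated patients/year) used in the survey

def classify_centers(patients_per_year):
    """Split a {center: n_patients} mapping into low- and high-volume groups.

    "High-volume" means >50 patients/year; everything else (including the
    boundary value 50, by assumption) goes into the low-volume group.
    """
    low = {c: n for c, n in patients_per_year.items() if n <= CUTOFF}
    high = {c: n for c, n in patients_per_year.items() if n > CUTOFF}
    return low, high

centers = {"A": 12, "B": 48, "C": 66, "D": 130, "E": 25}  # hypothetical data
low, high = classify_centers(centers)
print(sorted(low), sorted(high))                     # groups by center label
print(median(low.values()), median(high.values()))   # per-group medians
```

The same split can then be reused for any per-group summary (e.g., the median number of treated patients reported in the Results section).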
All 25 contacted RT centers responded to the survey, and all sections were completed. According to geographical location, seven centers are located in northern Italy, five in central Italy, and the remaining in southern Italy.
Results of the present survey confirmed that great variety exists among Italian centers in the management (prevention and treatment) of RIOM in HNC patients. Of note, the majority of participating centers are provided with different supportive care services and follow internal guidelines and/or literature recommendations. Moreover, a low number of patients (<15%) interrupted the RT treatment course, and the median overall treatment time (41 days) remained quite low.
Despite a large amount of literature data, few agents have reached level 1 evidence (i.e., results from prospective randomized trials and/or meta-analyses) for the management of RIOM. In this scenario, recommendations derived from expert consensus and literature review have been published over the last decades. Since 2004, the Multinational Association of Supportive Care in Cancer and the International Society of Oral Oncology (MASCC/ISOO) cooperative group has published its recommendations on the prevention and treatment of RIOM [ 18 – 21 ]. Similarly, the European Society for Medical Oncology (ESMO) has periodically published its recommendations since 2009 [ 15 , 22 ], while an Italian working group endorsed by the Associazione Italiana di Radioterapia ed Oncologia Clinica (Gruppo AIRO Inter-regionale Lazio-Abruzzo-Molise) did so in 2019 [ 17 ].
Prevention of RIOM
Results of MASCC/ISOO, ESMO, and AIRO recommendations on the prevention of RIOM are summarized in Table 2 .
Although supported by low-level literature evidence, pre-treatment dental evaluation, accurate oral hygiene, and sodium bicarbonate were recommended as standard of care for all adult patients who are candidates for RT for HNC [ 15 – 17 ]. Among the participating centers, two do not provide any recommendations to prevent RIOM. Of the remaining 19 (four centers used internal guidelines and details were not provided), sodium bicarbonate was the most frequently used agent (70% of centers). Data on the use of saline solution were more controversial and were not considered robust enough by the ESMO panelists. In the present survey, only one center advised patients to use saline solution mouthwashes to prevent RIOM.
Benzydamine and low-energy laser (LEL) were recommended for the prevention of RIOM both in patients treated with RT alone and in those treated with chemoradiation [ 15 ]. Benzydamine is a non-steroidal anti-inflammatory drug with anesthetic, analgesic, and antiseptic properties. A multicentric randomized double-blind placebo-controlled trial demonstrated the efficacy of benzydamine for RIOM prevention [ 23 ]. A total of 172 subjects (84 treated with benzydamine and 88 with placebo) were enrolled in 16 North American centers. Benzydamine oral rinse (1.5 mg/ml) or placebo was administered before and during RT, and for 2 weeks after the end of treatment. Results showed that benzydamine produced a 26.3% reduction in the area under the curve of mucositis scores compared to placebo ( p = 0.009). In particular, benzydamine produced a statistically significant benefit at high RT doses (range 25–37.5 Gy, p < 0.001, and range 37.5–50 Gy, p = 0.006), while it was not effective in patients treated with a slightly hypofractionated schedule (>2.2 Gy/fraction). Moreover, 33% of patients treated with benzydamine remained free from ulcers compared to 18% of the placebo group ( p = 0.037). Subsequently, four more prospective studies confirmed the efficacy of benzydamine in preventing and reducing the severity of oral mucositis in patients treated with RT [ 24 – 27 ]. Despite the literature evidence and recommendations, only three (12%) centers participating in the present survey reported having benzydamine in their armamentarium to prevent RIOM.
LEL stimulates the biological responses that repair injuries in healthy tissues and is therefore included among photobiomodulation therapies. A double-blind randomized trial (low-energy He-Ne laser vs placebo-light treatment) was published in 1999 by Bensadoun et al. [ 28 ]. Thirty patients were enrolled and received a daily application of laser/placebo during the whole course of RT. Results showed that the mean grade of mucositis was significantly lower in patients treated with LEL compared to the control group (1.7 ± 0.26 vs 2.1 ± 0.26, respectively, p = 0.01), with the largest differences observed during the last weeks (4th to 7th) of treatment. Moreover, the preventive use of the laser also allowed a significant reduction in oral pain ( p = 0.025). Subsequent studies confirmed the efficacy and safety of LEL in adult HNC patients treated with RT [ 29 – 31 ]. A recent position paper published by the World Association for photobiomoduLation Therapy stated that the literature evidence is robust enough for the clinical application of LEL to prevent oral mucositis, as well as in other settings of treatment-induced toxicity [ 32 ]. These data led to the inclusion of LEL in all published recommendations and guidelines. Despite this, only one center involved in the present survey uses LEL in its clinical practice. On the contrary, some other products, such as mucoadhesive agents, chlorhexidine, and sucralfate, are used by 10, 1, and 3 centers, respectively, although not supported by robust data. Of note, oral mucosa barrier and hyaluronic acid-based agents are routinely used by 43% and 26% of centers, respectively, despite not being mentioned in the above-cited published recommendations.
Treatment of RIOM
Results of MASCC/ISOO, ESMO, and AIRO recommendations on the treatment of RIOM are summarized in Table 3 .
With regard to strategies aiming to treat RIOM, only topical morphine was indicated to reduce oral pain; its use is suggested by both the MASCC/ISOO and the ESMO recommendations. In a double-blind study, Sarvizadeh et al. randomized 30 patients to treat grade 3 mucositis with topical morphine or a magic mouthwash (magnesium aluminum hydroxide, lidocaine, and diphenhydramine) for a period of 6 days. On the last day of treatment, mucositis was significantly lower in the study group compared to the control cohort ( p = 0.045) [ 33 ]. Similar results were obtained in a group of 26 patients randomly assigned to receive topical morphine or magic mouthwashes [ 34 ]. The duration of severe pain, as well as pain intensity, was lower in patients who received morphine compared to the control group. In the present survey, only one center prescribes topical morphine to treat any grade of mucositis, while other agents, such as mucoadhesive solutions and chlorhexidine, are more frequently used.
Hyaluronic acid-based agents are the products most frequently administered by respondents to the survey. A recent meta-analysis showed that hyaluronic acid was beneficial for both cutaneous and mucosal radiation-induced side effects (RR: 0.14, 95% CI: 0.04 to 0.45) [ 35 ].
Overall treatment time
The occurrence of RIOM causes pain and dysphagia, which produce weight loss and worsening of overall treatment compliance. This may lead to interruptions of the RT course due to uncontrolled side effects. Worsening of oncological outcomes occurs in patients who do not complete the RT course within the planned time. González Ferreira et al. [ 7 ] carried out a literature review and showed that delays in RT can produce an average loss of locoregional control ranging from 1.2% per day to 12–14% per week of interruption. Moreover, it has been estimated that a daily dose increase of about 0.6–0.8 Gy would be required to compensate for each day of overall treatment time prolongation. The median overall treatment time (considering both curative and postoperative treatments) reported by centers participating in the present survey was quite low (41 days, IQR: 35–45). Based on this finding, two main considerations can be made: (1) despite the wide variety of approaches to preventing and treating RIOM, their impact on the overall treatment course seems to be low, and (2) a network of supportive care services (including management of pain, nutrition, and psychological support, as well as hospitalization if required) is provided by the majority of RT facilities, and this aspect could have had a positive impact on patients’ compliance.
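The compensation estimate cited above reduces to simple arithmetic. The sketch below is illustrative only, not a clinical tool: the function name is ours, and the assumption that the extra dose scales linearly with the number of lost days is an idealization of the published per-day estimate.

```python
# Back-of-the-envelope sketch of the cited estimate: roughly 0.6-0.8 Gy of
# additional dose would be needed per day of overall treatment time
# prolongation. Illustrative only; not for clinical use.

def compensatory_dose_range(days_prolonged, gy_per_day=(0.6, 0.8)):
    """Return (min, max) extra dose in Gy for a given prolongation in days,
    assuming the per-day estimate scales linearly."""
    lo, hi = gy_per_day
    return days_prolonged * lo, days_prolonged * hi

# e.g. a 5-day interruption would call for roughly 3-4 Gy of extra dose
print(compensatory_dose_range(5))
```

This makes concrete why even short interruptions matter: a two-week break would, under this estimate, require on the order of 8 to 11 Gy of compensatory dose.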
Volume of treated patients
The definition of “high-” and “low-volume” centers (in terms of hospital volume and/or professional experience) has not yet been established. Nevertheless, it has been demonstrated that the higher the number of patients treated, the better the oncological outcomes. Eskander et al. performed a systematic review of the literature and showed that high-volume hospitals achieved better results in terms of patients’ long-term survival (HR 0.886, 95% CI 0.820–0.956) [ 36 ]. To quantify the impact of a center’s experience on the management of RIOM, we performed a subgroup analysis according to the number of treated patients/year, using an arbitrary cut-off of 50 patients. Results showed that in high-volume centers, modern RT techniques (namely IMRT) and standard high-dose chemotherapy (3-weekly CDDP) are more frequently used than in low-volume centers. Similarly, the overall use of published RIOM-related recommendations and the availability of supportive care services are slightly higher in high-volume centers for the majority of the considered parameters. Nevertheless, the mean overall treatment time was similar between the two groups, as was the number of patients who required a treatment interruption due to RIOM-related toxicity.
Limitations
Several limitations burden the present study. Centers participating in the survey represent only 15% of Italian RT facilities [ 37 ]. Nevertheless, the geographical distribution across southern (28%), central (20%), and northern Italy (52%), as well as the variability in the number of treated patients/year (high- and low-volume centers), suggests that the reported results can be considered representative of how RIOM is currently managed among Italian centers in daily clinical practice. Moreover, several other drugs and agents reported in the literature (e.g., antioxidants or immunonutrition) were not considered in the present analysis. Finally, factors other than mucositis (such as patient age, comorbidities, treatment characteristics, and dysgeusia) could affect patients’ compliance with the radiation treatment.
To the best of our knowledge, this is the first study reporting an accurate snapshot of the Italian attitude regarding which agents and drugs are currently used in daily clinical practice to prevent and treat RIOM in Italian RT facilities. Results showed that great variety still exists despite the availability of national and international recommendations. In this scenario, whether different strategies to manage RIOM could impact patients’ compliance and the overall treatment time of the radiation course is still unclear and requires further investigation. Moreover, the present findings strongly encourage efforts to standardize RIOM management protocols in daily clinical practice among RT facilities. To this aim, similar analyses in other countries would be useful to highlight possible geographical differences.
Radiation-induced oral mucositis (RIOM) is the most frequent side effect in head and neck cancer (HNC) patients treated with curative radiotherapy (RT). A standardized strategy for preventing and treating RIOM has not been defined. The aim of this study was to perform a real-life survey on RIOM management among Italian RT centers.
Methods
A 40-question survey was administered to 25 radiation oncologists working in 25 different RT centers across Italy.
Results
A total of 1554 HNC patients were treated in the participating centers in 2021, the majority (median across centers: 91%) with curative intent. Median treatment time was 41 days, with a mean interruption rate due to toxicity of 14.5%. Eighty percent of responders provide written oral cavity hygiene recommendations. Regarding RIOM prevention, sodium bicarbonate mouthwashes, oral mucosa barrier agents, and hyaluronic acid-based mouthwashes were the most frequently used topical agents. Regarding RIOM treatment, 14 (56%) centers relied on literature evidence, while internal guidelines were available in 13 centers (44%). Grade (G)1 mucositis is mostly treated with sodium bicarbonate mouthwashes, oral mucosa barrier agents, and steroids, while hyaluronic acid-based agents, local anesthetics, and benzydamine were the most used for G2/G3 mucositis. Steroids, painkillers, and anti-inflammatory drugs were the most frequently used systemic agents, independently of RIOM severity.
Conclusion
A great variety of strategies exists among Italian centers for the management of RIOM in HNC patients. Whether different strategies could impact patients’ compliance and the overall treatment time of the radiation course is still unclear and needs further investigation.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00520-023-08185-5.
Keywords | Section 1: questionnaire
Retrospective analysis
In 2021, a total of 1554 patients with HNC were treated in the 25 participating centers (median 54, IQR: 20–70). The majority (median 91%) of treatments had curative intent (36% of them postoperative), while the others were administered with palliative intent. In most cases (mean 84%), patients underwent the intensity-modulated RT (IMRT) technique. One center used a 3D RT technique for all patients, while the remaining 24 centers applied this technique to a median of 17% of patients.
Platinum-based chemotherapy was the most frequently used concurrent treatment (71% of patients), with a median of 43% and 29% of patients treated using weekly and 3-weekly schedules, respectively. Cetuximab was used in 17 centers to treat a mean of 10% of patients.
The median of the per-center median overall treatment times was 41 days (IQR: 35–45). The mean percentage of patients who interrupted treatment due to RT-related toxicity was 14.5% (data available for 19 centers). A median of 6% (IQR: 3.5–15) of patients required enteral nutrition.
Strategies for RIOM prevention
In almost all centers (96%), HNC patients are seen at least once a week during the RT course. Quality of life questionnaires are distributed to patients in 16% of centers, and pain data are collected on a quantitative scale in 80% of cases. In-treatment toxicity is collected systematically at least once a week (84% of facilities), using the CTCAE scale (38%), the RTOG scale (19%), or both (43%). In the case of concurrent chemotherapy, enteral nutrition is proposed only to patients with significant weight loss during the RT course in 48% of cases and to all fragile patients in 20% (20% in both situations). All but three centers (which never use parenteral nutrition) apply similar criteria to select candidates for parenteral nutrition.
The majority of centers (80%) provide patients with written oral cavity hygiene recommendations. Among the 17 centers for which data were available (data not available for three centers), accurate daily oral cavity cleaning (52%), use of mucosal barrier agents (47%), and pre-treatment dentistry evaluation (35%) were the most frequent recommendations.
About half of the centers (52%) use internal guidelines for RIOM prevention, and 15 centers refer to literature evidence and/or expert recommendations (60% of which are the recommendations provided by the Associazione Italiana di Radioterapia ed Oncologia Clinica — AIRO). Eighteen (72%) centers provide written general recommendations for RIOM prevention. Among the 14 centers for which data were available (data not available for four centers), accurate oral cavity cleaning (60%), oral mouthwashes with bicarbonate (47%), and pre-treatment dentistry evaluation (40%) were the most frequent pieces of advice.
All but four centers also suggest topical and/or systemic agents to prevent RIOM (Fig. 1 ).
Six centers use galenic products (mixtures of different agents) produced by their own pharmacies.
Strategies for RIOM treatment
The radiation oncologist manages acute RIOM toxicity in all but one center; medical oncologists and pain specialists also support patient care in 12 (48%) and eight centers, respectively. In 16 (64%) centers, hospitalization for supportive care is possible. Moreover, different services contribute to patient care during the radiation treatment course: 13 (52%) centers have a nurse dedicated to HNC patients, 18 (72%) have a supportive therapy service for pain management, 22 (88%) have nutritional consultants, 14 (56%) have a speech therapy service for the management of mechanical dysphagia, and 18 (72%) have a psycho-oncology service as well.
Fourteen (56%) centers base treatment of RIOM on literature evidence, while internal guidelines are present in 13 centers (44%). Eight centers (32%) followed neither internal guidelines nor literature data. Galenic agents are produced by the pharmacies of seven institutions.
The frequency of topical and systemic agents used to treat RIOM is reported in Figs. 2 and 3 , respectively.
The frequency of topical and systemic agents used to treat RIOM according to increasing grade of mucositis (from G1 to G3) is reported in Fig. 4 and Fig. 5 , respectively.
Volume of treated patients
To investigate the impact of patient volume (number of patients treated per year) on the management of RIOM, we classified centers as “low-volume” (<50 patients/year) and “high-volume” (>50 patients/year). Twelve and 13 centers comprised the low- and high-volume cohorts, respectively. The mean number of patients treated in each group was 21 (IQR: 10–25) and 96 (IQR: 66–136) for low- and high-volume centers, respectively.
Differences in terms of RT technique, concurrent systemic agents, center supportive network, and prevention/treatment strategies between high- and low-volume centers are reported in Table 1 .
The mean overall treatment time was 42 days (IQR: 38–45) in low-volume and 40 days (IQR: 30–45) in high-volume centers. The percentage of patients who interrupted the RT treatment in low- and high-volume centers was 16% and 13%, respectively.
Supplementary information
| Abbreviations
AUC: area under curve
HNC: head and neck cancer
IMRT: intensity-modulated radiotherapy
NSAID: non-steroidal anti-inflammatory drug
LEL: low-energy laser
RIOM: radiation-induced oral mucositis
RT: radiotherapy
QoL: quality of life
Acknowledgements
IEO, the European Institute of Oncology, is partially supported by the Italian Ministry of Health (“Ricerca Corrente” and “5x1000” funds). The Division of Radiation Oncology of IEO received research funding from AIRC (Italian Association for Cancer Research) and Fondazione IEO-CCM (Istituto Europeo di Oncologia-Centro Cardiologico Monzino) all outside the current project.
LB received a grant by the European Institute of Oncology-Cardiologic Center Monzino Foundation (FIEO-CCM), outside the current study. MGV was supported by a research fellowship from the Associazione Italiana per la Ricerca sul Cancro (AIRC) entitled “Radioablation ± hormonotherapy for prostate cancer oligo-recurrences (RADIOSA trial): potential of imaging and biology” registered at ClinicalTrials.gov NCT03940235 (accessed on 20th December 2022).
Author contribution
LB, MP: data collection, methodology, writing original draft; MGV: data analysis, visualization, writing original draft; MZ: data analysis, writing original draft; DA: conceptualization, methodology, writing original draft, supervision. All the remaining authors participated in the survey as responders. All authors had full access to the final version of the manuscript.
Declarations
Ethical approval
Given the investigative nature of the study (survey), no ethical approval was required. The study did not involve human or animal participants.
Conflict of interest
The Division of Radiation Oncology of IEO received institutional grants from Accuray Inc. and Ion Beam Applications (IBA). Alterio D. received unconditional financial support from Medizioni srl for the current study.
PMC10731717 | 38124066 | Introduction
Osteoarthritis (OA), the most prevalent joint disease, is a primary cause of joint pain and disability [ 1 ]. Characterized by comprehensive joint lesions, including cartilage degradation, synovial inflammation, osteophyte formation, and subchondral bone sclerosis, OA significantly impairs quality of life, working ability, and life expectancy. Clinically, the knee is the most common site of OA, followed by the hand and hip. The prevalence of symptomatic knee OA exceeds 10%, with a lifetime risk ranging from 14 to 45% [ 2 ]. While limited evidence suggests potential structural modification through pharmacological therapies, synchronous symptomatic benefits remain elusive [ 3 ]. Consequently, OA constitutes a substantial and escalating health burden, with remarkable implications for patients, healthcare systems, and broader socioeconomic costs [ 4 , 5 ].
In recent decades, the widely accepted hypothesis has been that OA pathogenesis starts with injury and consequent degradation of cartilage. However, emerging evidence highlights synovial inflammation as a pivotal process [ 6 ]. Synovitis, as indicated by magnetic resonance imaging (MRI), is associated with OA symptoms [ 7 ]. Synovial inflammation arises from the immune response against damage-associated molecular patterns (DAMPs) or alarmins, primarily high-mobility group box-1 (HMGB1) and proteins of the S100 family [ 8 ]. As the synovium is predominantly composed of macrophages and fibroblast-like synoviocytes (FLS), synovial macrophages play a crucial role in orchestrating inflammatory processes during OA pathogenesis [ 9 – 11 ].
HMGB1, a prominent DAMP in OA pathogenesis, is released into the synovial fluid by senescent or necrotic cells [ 12 ]. HMGB1 stimulates synovial macrophages through pattern recognition receptors (PRRs), notably toll-like receptor 4 (TLR4), activating downstream inflammatory signaling pathways, including NF-κB and MAPK. In response to HMGB1, synovial macrophages release inflammatory factors, such as tumor necrosis factor α (TNF-α), initiating synovial inflammation [ 13 ]. Nucleotide-binding oligomerization domain containing 2 (NOD2), a member of PRRs alongside TLR4, is expressed in the cytosol of immune cells, including myeloid cells, monocytes, macrophages, and dendritic cells. NOD2 has been reported to be associated with Crohn’s disease and plays important roles in microbe sensing and host response [ 14 ]. Recent findings indicate the inhibitory influence of NOD2 on TLR signaling pathways in colorectal tumorigenesis [ 15 ]. However, it remains unknown whether NOD2 has an influence on the pathogenesis of OA.
This study provides compelling evidence for the pivotal role of NOD2 in OA pathogenesis, demonstrating its capacity to mitigate osteoarthritis by attenuating HMGB1-induced activation of synovial macrophages. Using a collagenase-induced osteoarthritis (CIOA) model in mice, in which synovial hyperplasia and the effects of macrophages are more pronounced, with significant synovial activation [ 9 ], our study elucidates the mechanistic role of NOD2 in orchestrating the activation of synovial macrophages and highlights NOD2 as a potential target for OA prevention and treatment.
Human synovial tissue
Human synovial tissue was obtained from 13 patients in the Department of Orthopaedic Surgery at Sun Yat-sen Memorial Hospital, Sun Yat-sen University. The patients underwent arthroscopic surgery or joint replacement of the knee, with ten diagnosed with osteoarthritis. Ethical considerations precluded obtaining synovial membrane samples from completely healthy individuals; therefore, synovial membrane samples from patients with acute trauma, devoid of chronic inflammation, were used as controls, as supported by relevant literature [ 16 , 17 ]. Written informed consent was obtained before surgery, with approval from the Ethics Committee of Sun Yat-sen Memorial Hospital (Approval No. SYSEC-KY-KS-2021–243). The procedures were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Declaration of Helsinki. Demographic characteristics are listed in Supplementary Table 1 .
Experimental mouse model
Healthy male C57BL/6J mice (8 weeks old) were obtained from the Animal Laboratory of Sun Yat-sen University. Using a random number table, they were randomized into 4 groups of 6 mice each (24 mice in total). Sample size determination was based on our previous experiments [ 18 ]. Mice were housed in a specific pathogen-free (SPF) animal care facility with free access to food and water. Osteoarthritis was induced in the CIOA group by intra-articular injection of collagenase VII (1 U) into the right knee joint, twice on alternate days [ 16 ]. In the CIOA + Mock/ + NOD2 overexpression (oe-NOD2) groups, mock lentiviruses or lentiviruses overexpressing NOD2 were injected into the articular cavity 1 week after collagenase injection [ 19 ]. The Osteoarthritis Research Society International (OARSI) score, osteophyte formation, and expression of specific proteins were assessed. All protocols and experiments were approved by the Institutional Animal Care and Use Committee of Sun Yat-sen University (Approval No. 20220223), in accordance with the animal care guidelines and the 3Rs principle (replacement, refinement, and reduction). To be explicit, confounders were not controlled.
Immunohistochemical (IHC) staining
Synovial tissue was fixed in 4% paraformaldehyde (PFA), cleared with xylene, and embedded in paraffin at 54 °C for sectioning (section thickness: 3 μm). Knee specimens from mice were decalcified in 10% ethylenediaminetetraacetic acid (EDTA, pH 7.4) for 30 days before paraffin embedding and sectioning.
Sections underwent deparaffinization and rehydration using xylene and ethanol with gradient concentrations. Antigen retrieval was achieved by pepsin, followed by immersion in 3% H 2 O 2 for 20 min to eliminate endogenous peroxidase activity, and blocking with 5% bovine serum albumin (BSA) for 1 h. Tissue sections were incubated with diluted primary antibodies for 2 h at 37 °C, followed by incubation with diluted secondary antibodies labeled with horseradish peroxidase (HRP) for 30 min. Visualization with 3,3′-diaminobenzidine (DAB) and nuclear staining with hematoxylin were performed, and sections were sealed with gum and observed using a biomicroscope (DM2000, Leica).
Immunofluorescent staining
Tissue sections underwent the same processing as described in the “Immunohistochemical (IHC) staining” section. For macrophages, cells were seeded in confocal plates (BDD011035, Jet Bio-Filtration) and stimulated with HMGB1 at various time points after 24 h. The cells were fixed in 4% PFA for 15 min, incubated in 0.1% Triton X-100 at 20 °C for 15 min, and then gently shaken before blocking with 1% BSA. Following these steps, the sections or cells were subjected to overnight incubation with primary antibodies at 4 °C, and subsequent incubation with secondary antibodies at 20 °C for 1 h. Nuclear staining was achieved using 4′,6-diamidino-2-phenylindole (DAPI), and fluorescence was observed using a confocal microscope (LSM 710, Carl Zeiss).
Cell preparation of macrophages and fibroblasts
Femurs and tibias from 8-week-old male C57BL/6J mice were isolated under sterile conditions for extraction of bone marrow-derived macrophages (BMDMs). BMDMs were cultured with macrophage colony-stimulating factor (M-CSF) (51112-MNAH, Sino Biological Inc.) for 7 days before further experiments.
BMDMs were cultured in high-glucose Dulbecco’s modified Eagle’s medium (DMEM) with 10% fetal bovine serum (FBS), supplemented with 2% penicillin and streptomycin, at 37 °C in 5% CO 2 . NIH3T3 fibroblasts (CL-0171, Procell) were cultured similarly. At approximately 70–80% confluence, cells were stimulated with recombinant human HMGB1 and/or muramyl dipeptide (MDP) at a concentration of 1 μg/ml. Proteasomal degradation was blocked by incubation with MG132 (12.5 μM) for 2 h before stimulants were added.
To assess the influence of macrophages on fibroblasts and chondrocytes, untransfected macrophages and macrophages transfected with mock or oe-NOD2 lentiviruses were stimulated with HMGB1 (1 μg/ml) for 24 h. After centrifugation to remove cell debris, supernatants were applied to fibroblasts or chondrocytes, with supernatants of unstimulated macrophages as the negative control.
Isolation and culture of chondrocytes
Articular surfaces of the femur and tibia were isolated from 4-week-old male Sprague–Dawley rats obtained from the Animal Laboratory of Sun Yat-sen University. Chondrocytes were released by incubation with 0.25% trypsin at 37 °C for 20 min and collagenase II at 37 °C for 30 min. After filtration and centrifugation, the isolated chondrocytes were resuspended in DMEM/F-12 supplemented with 10% FBS and 2% penicillin and streptomycin.
Real-time PCR
Total RNA was obtained using the RNAiso Plus reagent kit, and its concentration was determined with a NanoDrop™ 2000 spectrophotometer (Thermo Fisher Scientific). RNA was reverse transcribed with the PrimeScript™ RT Master Mix reagent kit, and the resulting cDNA was mixed with UNICON™ qPCR SYBR® Green Master Mix and corresponding primers for real-time polymerase chain reaction (PCR) on a LightCycler® 96 Real-Time PCR System (Roche). The 2^−ΔΔCt method was applied for quantitative analysis of gene expression. Primer sequences for PCR and information on reagents in this study are provided in Supplementary Table 2 and Supplementary Table 3 , respectively.
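As a minimal sketch of the 2^−ΔΔCt calculation (all Ct values below are illustrative, not data from this study): the target gene is first normalized to a reference gene within each sample, then to the control condition.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^-ddCt method.

    ct_target / ct_ref: Ct of target and reference gene in the treated sample;
    ct_target_ctrl / ct_ref_ctrl: the same pair in the control sample.
    """
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # normalize to control condition
    return 2 ** -dd_ct

# Illustrative Ct values: target reaches threshold 2 cycles earlier after treatment
fold = relative_expression(24.0, 20.0, 26.0, 20.0)
print(fold)  # → 4.0 (target ~4-fold upregulated vs. control)
```

A lower Ct means earlier amplification and hence higher expression, which is why the exponent carries a negative sign.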
Western blotting
Cells were lysed in radioimmunoprecipitation assay (RIPA) buffer containing phenylmethylsulphonyl fluoride (PMSF) and a phosphatase/protease inhibitor cocktail. Protein concentration was measured using a bicinchoninic acid (BCA) assay kit, and approximately 20 μg of protein was subjected to sodium dodecyl-sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a polyvinylidene difluoride (PVDF) membrane. The membrane was sequentially immersed in 5% BSA at room temperature for 1 h, primary antibody at 4 °C overnight, and HRP-conjugated secondary antibody at room temperature for 1.5 h. Super ECL Detection Reagent was applied to the membrane, and immunoblotting data were obtained with a digital imaging system (G:BOX Chemi XT4, Syngene). Semi-quantitative analysis of western blotting was performed using ImageJ software (Release 1.53p).
ELISA
Supernatants from the cell culture system were collected, and cell debris was removed via centrifugation. TNF-α levels were measured with an enzyme-linked immunosorbent assay (ELISA) kit, according to the manufacturer’s instructions.
siRNA and lentiviruses
To genetically modify the expression of NOD2 in macrophages, a small interfering RNA (siRNA) sequence targeting mouse NOD2 (Gene ID: 257632) (NOD2-siRNA) was designed and synthesized by GenePharma. Negative control siRNA (NC-siRNA) served as control. Macrophages were transfected with NOD2-siRNA using Lipofectamine™ RNAiMAX, after starvation in serum-free Opti-MEM™ for 1 h. Six hours after transfection, the medium was replaced with DMEM containing 10% FBS.
Recombinant lentivirus targeting mouse NOD2 (NOD2-LV) was constructed by Cyagen, with mock lentivirus as the control. Macrophages at approximately 30% confluence were incubated with lentiviruses at 37 °C overnight and subjected to fluorescence-based cell sorting to obtain stably transfected macrophages. Similar methods were used for targeted inhibition or overexpression of NLRP12 in macrophages. Sequences of siRNAs are provided in Supplementary Table 4 .
Flow cytometry
Macrophages were stimulated, harvested, and pelleted by centrifugation, then incubated with eBioscience™ IC Fixation Buffer at 20 °C for 20 min and subsequently with Perm/Wash Buffer to permeabilize the cells. Macrophages were then incubated with APC-conjugated CD206 antibody or PE-Cyanine7-conjugated iNOS antibody for 20 min at 20 °C. Analysis was performed on a flow cytometer (FACSVerse™, BD Biosciences).
RNA-seq
Macrophages were stimulated with HMGB1 for 4 h and lysed using TRIzol™ Reagent to obtain total RNA. After rRNA removal, mRNAs and ncRNAs were retained for strand-specific library construction and sequencing. RNA fragments were reverse transcribed into cDNA and amplified by PCR. Sequencing was performed on an Illumina HiSeq 4000, and raw reads containing adapter sequence or of low quality (Q-value ≤ 20) were removed. The remaining reads were mapped to the mouse reference genome (GRCm38). Differentially expressed genes (DEGs) were identified at FDR < 0.05 and |log2FC| > 1.
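The DEG criteria above amount to a simple threshold filter over the per-gene statistics. A minimal sketch, with invented example records (the gene names and numbers are illustrative only, not results from this study):

```python
# Each record: gene symbol, log2 fold change (stimulated vs. resting), FDR-adjusted p-value
results = [
    {"gene": "Nod2",  "log2fc": 2.8,  "fdr": 0.001},
    {"gene": "Tnf",   "log2fc": 1.6,  "fdr": 0.010},
    {"gene": "Actb",  "log2fc": 0.1,  "fdr": 0.900},  # housekeeping gene, unchanged
    {"gene": "GeneX", "log2fc": -1.4, "fdr": 0.200},  # large change but not significant
]

def is_deg(rec, fdr_cutoff=0.05, lfc_cutoff=1.0):
    """A gene counts as a DEG when FDR < 0.05 and |log2FC| > 1."""
    return rec["fdr"] < fdr_cutoff and abs(rec["log2fc"]) > lfc_cutoff

degs = [r["gene"] for r in results if is_deg(r)]
print(degs)  # → ['Nod2', 'Tnf']
```

Both conditions must hold: GeneX is excluded despite its fold change because its FDR exceeds the cutoff.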
Micro-CT
Whole knee joints were fixed in 4% PFA and assessed by micro-computed tomography (micro-CT) using an imaging system (ZKKS-MCT-Sharp, Zhongke Kaisheng Medical Technology). Radiographic parameters were set at 70 kV and 100 μA with a 100-ms exposure (section thickness 10 μm) to obtain optimal projections. The region of interest (ROI) was designated from the images and processed with ZZKS-MicroCT4.1 software to obtain quantitative data on the number and volume of osteophytes around the knee joints.
Histological assessment
Safranin O/fast green staining was conducted, and the scoring system developed by OARSI was adopted for semi-quantitative analysis of knee joints, considering the extent and depth of cartilage loss to grade the severity of cartilage lesions. These assessments were performed by PDG and THL, who were blinded to group assignment.
Quantification of IHC staining was performed using ImageJ software (National Institutes of Health, Bethesda, Maryland) with the IHC Profiler plugin. Randomly selected fields of IHC images were divided into four classes based on pixel intensity values: high positive, positive, low positive, and negative. The IHC optical density score (ODS) was then calculated as ODS = (high positive [%] × 4 + positive [%] × 3 + low positive [%] × 2 + negative [%] × 1)/100, where % represents the percentage contribution of each class.
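The ODS is a weighted average of the four intensity classes. A minimal sketch (the class percentages below are illustrative, not measured values):

```python
def ihc_ods(high_pos, pos, low_pos, neg):
    """Weighted IHC optical density score from IHC Profiler class percentages.

    Arguments are the percentage of pixels in each intensity class;
    they are expected to sum to 100.
    """
    assert abs(high_pos + pos + low_pos + neg - 100) < 1e-6
    return (high_pos * 4 + pos * 3 + low_pos * 2 + neg * 1) / 100

# A field dominated by strong staining scores close to the maximum of 4
print(ihc_ods(70, 20, 5, 5))  # → 3.55
```

By construction the score ranges from 1 (entirely negative) to 4 (entirely high positive), so group means can be compared on a common scale.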
Migration and invasion
To evaluate the influence of macrophages on fibroblasts, migration and invasion of NIH3T3 fibroblasts were assessed by wound-healing (scratch) assay and transwell assay, respectively. Untransfected macrophages and macrophages transfected with mock or oe-NOD2 lentiviruses were stimulated with 1 μg/ml HMGB1 for 24 h. Supernatants were collected and centrifuged to discard cell debris; supernatants of unstimulated macrophages served as the control. The NIH3T3 monolayer was scratched with a 200-μl pipette tip in a uniform pattern, washed with PBS to remove detached cells, and incubated in the supernatants for 24 h. Images were acquired for quantitative analysis using ImageJ software (Release 1.53p), and the cell-free area was measured. The migration rate was expressed as (1 − final area/initial area) × 100%.
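A minimal sketch of the migration-rate calculation (the areas are invented pixel counts standing in for the ImageJ measurements):

```python
def migration_rate(initial_area, final_area):
    """Percentage wound closure: (1 - final cell-free area / initial cell-free area) * 100.

    Both areas refer to the cell-free scratch region, in any consistent unit.
    """
    return (1 - final_area / initial_area) * 100

# Scratch area shrinks from 120,000 px^2 at t=0 to 48,000 px^2 at 24 h
print(migration_rate(120_000, 48_000))  # → 60.0 (% closure)
```

A rate of 0% means the wound did not close at all; 100% means it closed completely.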
Transwell® inserts with 8.0-μm pores (3422, Corning Incorporated) coated with Matrigel® were used to assess invasion of NIH3T3 fibroblasts. Cells were seeded in the upper chambers in serum-free medium, and supernatants were added to the lower chambers. After 48 h of incubation, cells remaining in the upper chambers were wiped off with a cotton swab. Cells on the underside of the membrane were fixed with 4% PFA and stained with crystal violet solution. The number of stained cells was calculated as the average of three randomly chosen fields.
Statistics
The normality of data was tested using the Shapiro–Wilk test. For non-parametric analyses, the Wilcoxon test was used for comparisons between two groups, and the Kruskal–Wallis test followed by Dunn’s multiple comparisons test for multi-group comparisons; these data are presented as medians and interquartile ranges. For parametric analyses, Student’s t test was used for comparisons between two groups, and analysis of variance (ANOVA) with Bonferroni’s correction for multi-group comparisons; these data are presented as means and standard errors of the mean (s.e.m.). All analyses were performed using GraphPad Prism 6 (GraphPad Software). P < 0.05 was considered significant.
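As an illustration of the descriptive statistics reported for parametric data, the sketch below computes mean, s.e.m., and a 95% confidence interval for one group using only the standard library. The measurements are invented, and the critical value 2.571 (two-tailed t, df = 5) is an assumption matching groups of n = 6 mice.

```python
import math
import statistics

def mean_sem_ci(values, t_crit=2.571):
    """Mean, s.e.m., and 95% CI (mean ± t_crit * s.e.m.) for one small sample."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))  # sample SD / sqrt(n)
    return m, sem, (m - t_crit * sem, m + t_crit * sem)

# Invented osteophyte-volume-like measurements (mm^3) for one group of 6 mice
m, sem, ci = mean_sem_ci([0.80, 0.85, 0.90, 0.82, 0.88, 0.84])
print(round(m, 3), round(sem, 4), tuple(round(x, 3) for x in ci))
```

For larger or unequal group sizes the t critical value would change with the degrees of freedom; a statistics package would look it up rather than hard-code it.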
NOD2 is upregulated in synovial macrophages of osteoarthritis patients and HMGB1-stimulated macrophages
To investigate potential molecules orchestrating macrophage activation, we subjected RAW264.7 macrophages to HMGB1 stimulation for 4 h and captured mRNA profiles by RNA sequencing. DEGs were identified (|log2FC| > 1, FDR < 0.05), and the results were visualized as a heat map with cluster analysis (Fig. 1 A, B). A protein–protein interaction network (PPIN) based on the STRING database revealed a close association between NOD2 and TLR4 pathway activation in macrophages (Fig. 1 C).
We further collected synovial tissue samples from osteoarthritis patients and healthy controls for IHC staining, with demographic characteristics outlined in Supplementary Table 1 . In accordance with the RNA-seq results, the expression of NOD2 was substantially increased in osteoarthritic synovial tissue compared to healthy synovial tissue. Concurrently, the well-known proinflammatory cytokine TNF-α, primarily secreted by activated macrophages, was upregulated in osteoarthritic synovial tissue (Fig. 1 D). Quantitative analysis confirmed the elevation of NOD2 and TNF-α levels, by comparing the average intensity scores between the two groups (Fig. 1 E). To explore the spatial distribution of NOD2 within synovial tissue, immunofluorescent staining was employed. F4/80, a surface marker of macrophages, exhibited elevated expression in osteoarthritic synovial tissue, mirroring the pattern of NOD2. Remarkably, the green fluorescence indicative of NOD2 consistently co-localized with the red fluorescence of F4/80, identifying macrophages as the principal source of NOD2 in osteoarthritic synovial tissue (Fig. 1 F). Immunofluorescent staining further indicated a shift in the M1/M2 ratio favoring the M1 subtype in osteoarthritic synovial tissue compared with healthy controls (Fig. 1 G, H). Validation through real-time PCR and Western blotting of synovial tissue corroborated the elevated expression of NOD2 and TNF-α in osteoarthritis patients (Fig. 1 I, J, Supplementary Fig. 1 A, B).
HMGB1 promotes NOD2 expression and macrophage activation
IHC staining of synovial tissue revealed elevated expression of HMGB1 in osteoarthritis patients, particularly in cases classified as radiographically severe osteoarthritis (Kellgren/Lawrence grade > 2) (Fig. 2 A). This observation was substantiated by comparing the average intensity scores across groups (Fig. 2 B). Subsequently, we investigated the impact of HMGB1 stimulation on the expression of NOD2 and the inflammatory cytokine TNF-α in macrophages at various time intervals. HMGB1 stimulation at gradient concentrations ranging from 0.1 μg/ml to 1.0 μg/ml significantly upregulated NOD2 mRNA expression, with a more pronounced effect at higher concentrations (Fig. 2 C). Consistently, real-time PCR showed an increase in TNF-α mRNA expression following HMGB1 stimulation (Fig. 2 D). This elevation was further validated at the protein level by enzyme-linked immunosorbent assay (ELISA) and immunofluorescent staining (Fig. 2 E, F). Immunofluorescent staining additionally illustrated the translocation of p65 (red) from the cytoplasm into the nucleus (blue) in macrophages upon HMGB1 stimulation (Fig. 2 G).
Indeed, HMGB1 stimulation not only led to a significant increase in the protein expression of NOD2 but also induced elevated levels of p-IKKβ, p-p65, p-JNK, and p-ERK, indicative of a comprehensive activation of the NF-κB and MAPK pathways (Fig. 2 H, Supplementary Fig. 1 C-G). Activated macrophages typically undergo polarization into either M1 or M2 subtype. By employing biomarkers such as iNOS (characteristic of M1 subtype) and CD206 (characteristic of M2 subtype), flow cytometry indicated that HMGB1 induced M1 polarization of macrophages (Fig. 2 I).
NOD2 modulates macrophage activation induced by HMGB1
To elucidate the influence of NOD2 on macrophage activation induced by HMGB1, we employed siRNA targeting NOD2 (si-NOD2) in macrophages. The detailed design, synthesis, and construction of si-NOD2 have been described in our previous study [ 20 ]; the sequences are provided in Supplementary Table 4 , and verification data are presented in Supplementary Fig. 4 . Considering that TLR4 is the predominant receptor for HMGB1, we applied TAK-242, an inhibitor of the TLR4 pathway, to explore the potential interaction between NOD2 and TLR4. Real-time PCR confirmed a significant reduction in NOD2 expression in macrophages transfected with si-NOD2. Intriguingly, TLR4 pathway inhibition also decreased NOD2 mRNA expression, highlighting the pivotal role of TLR4 in macrophage NOD2 upregulation in response to HMGB1 (Fig. 3 A). As expected, TLR4 inhibition reduced TNF-α expression at both mRNA and protein levels. Notably, TNF-α expression in NOD2 knock-down macrophages was significantly higher than in the negative control (NC) group, suggesting an inhibitory role of NOD2 in HMGB1-induced macrophage activation (Fig. 3 B, C). This was further confirmed by Western blotting, in which TAK-242 attenuated NOD2 upregulation and NOD2 knock-down resulted in stronger activation of the NF-κB and MAPK pathways (Fig. 3 D, Supplementary Fig. 2 A-E).
Next, we further explored the regulatory effect of NOD2 on macrophage activation by constructing a recombinant lentivirus carrying sequences that induce NOD2 overexpression, as detailed in our previous study [ 20 ]. Macrophages overexpressing NOD2 demonstrated lower TNF-α expression at both mRNA and protein levels in response to HMGB1 stimulation (Fig. 3 E, F). This was corroborated by immunofluorescent staining (Fig. 3 G). Notably, MDP, a potent activator of NOD2, failed to inhibit TNF-α expression (Supplementary Fig. 2 F, G). Furthermore, NOD2 overexpression in macrophages impeded the HMGB1-induced translocation of p65 (red) from the cytoplasm into the nucleus (blue) (Fig. 3 H). Moreover, NOD2 overexpression reduced the protein levels of p-IKKβ, p-p65, p-JNK, and p-ERK in response to HMGB1 (Fig. 3 I, Supplementary Fig. 2 H-L) and dampened M1 polarization of macrophages (Fig. 3 J).
NOD2 overexpression attenuates the pro-inflammatory paracrine effect of macrophages on FLS and chondrocytes
To unravel the paracrine effects of NOD2-overexpressing macrophages on FLS and chondrocytes, untransfected macrophages and macrophages transfected with mock or oe-NOD2 lentiviruses were exposed to 1 μg/ml HMGB1 for 24 h, with unstimulated macrophages as the negative control. Supernatants were collected, centrifuged to eliminate cell debris, and subsequently applied to fibroblasts (Fig. 4 A–G) and chondrocytes (Fig. 4 H–I). Compared to the untransfected and mock groups, real-time PCR demonstrated a lower mRNA level of TNF-α in fibroblasts of the oe-NOD2 group (Fig. 4 A). This aligns with the Western blotting results, in which subdued activation of the NF-κB and MAPK pathways was observed in the oe-NOD2 group (Fig. 4 B, Supplementary Fig. 3 A). Notably, fibroblasts developed polarized lamellipodia, marked by p-FAK and indicative of a proinflammatory phenotype with invasive capability; these features were absent in the oe-NOD2 group (Fig. 4 C). Migration and invasion of fibroblasts, assessed by scratch assay and transwell assay, were also impaired in the oe-NOD2 group (Fig. 4 D–G).
Moreover, the overexpression of NOD2 in macrophages significantly enhanced the expression of anabolic factors in chondrocyte metabolism, including COL2A1 (or COL2 protein), SOX9, and aggrecan. Simultaneously, the expression of catabolic factors in chondrocyte metabolism, such as MMP3, MMP13, ADAMTS4, and ADAMTS5, was downregulated (Fig. 4 H–I, Supplementary Fig. 3 B).
NOD2 alleviates pathological changes in mouse osteoarthritis model
To assess the impact of NOD2 in osteoarthritis, we constructed the CIOA mouse model, with detailed grouping outlined in the “ Materials and methods ” section. Data from 6/6 mice were included in each analysis. Intra-articular injection of collagenase VII induced cartilage lesions in the mouse knee joint, as depicted by Safranin O/fast green staining (Fig. 5 A). Lentivirus-mediated overexpression of NOD2 significantly preserved articular cartilage, as evidenced by the OARSI score (Fig. 5 D). Micro-CT scanning and 3D reconstruction revealed a higher number and volume of peri-articular osteophytes in the CIOA (0.848 mm³, 95% CI 0.760–0.935 mm³) and CIOA + Mock groups (0.885 mm³, 95% CI 0.795–0.975 mm³) compared with the Ctrl group (0.539 mm³, 95% CI 0.494–0.583 mm³). However, intra-articular injection of lentiviruses overexpressing NOD2 alleviated osteophyte formation in the CIOA + oe-NOD2 group (0.622 mm³, 95% CI 0.547–0.698 mm³) (Fig. 5 B, E, F).
IHC staining revealed higher expression of HMGB1 in mouse synovial tissue of the CIOA group compared to the Ctrl group, consistent with the findings in humans. Notably, the CIOA + oe-NOD2 group showed an attenuated elevation of HMGB1, suggesting a negative feedback effect of NOD2 overexpression on HMGB1 release (Fig. 5 C, G). Further insights into the effect of NOD2 on the pathological process of CIOA in mice were obtained from IHC staining of knee joints. Expression of NOD2 in the synovial tissue was upregulated in the CIOA group and, as expected, higher in the CIOA + oe-NOD2 group (Fig. 5 C, H). Concurrently, the elevation of TNF-α observed in CIOA was partially dampened by NOD2 overexpression (Fig. 5 C, I), indicating an inhibitory effect of NOD2 on the inflammatory process. Moreover, aggrecan, the predominant proteoglycan in articular cartilage, was partially protected by NOD2 overexpression from the significant loss seen in CIOA (Fig. 5 C, J). Conversely, NOD2 overexpression inhibited MMP-13, a vital catabolic factor involved in cartilage degradation (Fig. 5 C, K).
Further exploration of the underlying mechanism by which overexpression of NOD2 alleviated osteoarthritis in mice involved immunofluorescent staining. Both F4/80 and iNOS in synovial tissue were elevated in CIOA, with a more significant increase observed in iNOS, indicating that M1 macrophages were the dominant phenotype in OA. Interestingly, NOD2 overexpression more significantly retarded the increase in iNOS compared to F4/80, suggesting an association between interrupted M1 phenotype transition and the alleviation of osteoarthritis by NOD2 overexpression (Fig. 5 L, M). | Discussion
This study focused on the pivotal role of NOD2 in the pathogenesis of OA. NOD2 was identified as a differentially expressed gene in activated macrophages by RNA sequencing, which was further corroborated by IHC staining of synovial tissue sections. Although elevated in response to HMGB1 stimulation, NOD2 exerts a negative effect on macrophage activation and the release of inflammatory cytokines. Coculture with supernatants from genetically modified macrophages induced phenotypic shifts in FLSs and chondrocytes, implicating NOD2 as a pivotal factor through which synovial macrophages orchestrate the inflammatory processes of OA pathogenesis. Furthermore, in vivo overexpression of NOD2 via lentivirus injection significantly alleviated the severity of osteoarthritis in mice. These findings shed light on a novel regulatory element in OA pathogenesis and suggest NOD2 as a potential preventive and therapeutic target.
Recent clinical studies have identified synovial inflammation as a characteristic feature of OA development and progression [ 7 , 21 ]. Synovial inflammation is detected throughout the entire course of OA [ 22 ] and precedes other pathological changes, serving as a predictive marker before the onset of OA. Macrophages are profoundly involved in various diseases, including developmental, inflammatory, tumoral, and degenerative diseases [ 23 , 24 ]. Direct in vivo evidence of macrophage involvement in human osteoarthritis has been provided by the finding that activated, not resting, macrophages were recruited in 76% of OA knees [ 24 ]. Therefore, understanding the regulatory mechanism of macrophage activation is crucial to OA pathogenesis. In this study, murine BMDMs were employed, and RNA sequencing was used to identify differentially expressed genes in activated versus resting macrophages. RNA sequencing, coupled with protein–protein interaction analysis, ultimately led us to focus on NOD2.
NOD2, also known as caspase recruitment domain-containing protein 15 (CARD15), is a member of the NOD-like receptor (NLR) family and is thus also designated NLRC2 (NLR with a CARD 2). NOD2 comprises C-terminal leucine-rich repeats (LRR), an intermediate nucleotide-binding domain (NACHT), and N-terminal CARDs [ 25 ]. Polymorphisms of NOD2 have been associated with Crohn’s disease, an inflammatory bowel disease, and Blau syndrome, an autoinflammatory condition. Further research has revealed that NOD2 is essential for bacterial sensing, specifically recognizing bacterial MDP and subsequently activating immune responses, including inflammatory signaling pathways such as NF-κB and MAPK [ 26 ]. Consistent with prior studies, NOD2 is upregulated in response to MDP stimulation and contributes to macrophage activation and the release of inflammatory cytokines [ 27 ]. Notably, this study demonstrates that NOD2 expression also rises in response to HMGB1, one of the most relevant DAMPs in OA pathogenesis. Moreover, preconditioning with a TLR4 inhibitor impairs the upregulation of NOD2, implicating an indispensable role of TLR4 signaling in HMGB1-induced NOD2 upregulation. However, the specific mechanism underlying NOD2 upregulation via TLR4 signaling remains to be elucidated.
Intriguingly, in lentivirus-transfected macrophages overexpressing NOD2, we observed an unexpected attenuation of the inflammatory response to HMGB1, whereas MDP, the canonical activator of NOD2, failed to exert an inhibitory effect on the inflammatory response. These seemingly contradictory findings point to dual effects of NOD2 on HMGB1/TLR4 signaling and the subsequent inflammatory response, a mechanism that remains unclear. Existing literature suggests that NOD2 suppresses TLR-mediated activation of the NF-κB signaling pathway via interferon regulatory factor 4 (IRF4), which can be induced in an MDP-independent manner [ 15 ]. However, it remains unclear whether NOD2 attenuates the HMGB1/TLR4 signaling pathway via induction of IRF4 in an MDP-independent manner. Additionally, recruitment of receptor-interacting protein 2 (RIP2, also known as RICK), a downstream adaptor kinase, is required for NOD2-dependent inflammatory responses induced by MDP [ 28 , 29 ]. NOD2 activates RIP2 via CARD-CARD interaction, and RIP2 subsequently polymerizes into filaments, which is essential for downstream inflammatory responses [ 30 ]. Further investigation is warranted to determine whether RIP2 participates in NOD2-dependent inhibition of macrophage activation, and whether RIP2 and IRF4 collaborate, considering the latter’s anticipated role as previously reported [ 31 ].
OA involves pathological changes in multiple types of tissues and cells, including macrophages and FLSs in synovial tissue and chondrocytes in articular cartilage [ 32 ], leading to the concept that the constituents of joints should be given balanced consideration [ 18 ]. In this study, supernatants were collected from HMGB1-stimulated macrophages and added to FLSs and chondrocytes, respectively, to simulate the in vivo intra-articular environment. This approach allowed evaluation of the effect of altered macrophage NOD2 expression on synovial inflammation and cartilage degradation [ 33 ]. The inflammatory phenotype of FLSs was characterized by increased invasion and migration, along with polarized formation of lamellipodia colocalizing with phosphorylated focal adhesion kinase (p-FAK) [ 34 , 35 ]. The chondrocyte inflammatory phenotype involved matrix degradation and an imbalance of anabolic and catabolic factors [ 36 , 37 ]. Given the vital role of macrophages in orchestrating the inflammatory process during OA pathogenesis [ 38 ], our in vitro experiments demonstrated the significance of NOD2 by showing that its overexpression alters the paracrine effects of activated macrophages on FLSs and chondrocytes. However, the regulatory mechanism of macrophage NOD2 remains far from comprehensively elucidated.
Recognizing synovial macrophages as potential targets for osteoarthritis prevention and treatment [ 9 , 39 ], we further performed in vivo experiments to explore feasible interventions in mice. Commonly used OA models in mice include destabilization of the medial meniscus (DMM), anterior cruciate ligament transection (ACLT), intra-articular injection of mono-iodoacetate (MIA), and collagenase-induced osteoarthritis (CIOA). The CIOA model is characterized by pronounced synovial hyperplasia and is therefore considered well suited to assessing the effect of macrophages [ 9 , 16 ]. Intra-articular injection of lentiviral vectors was employed as an efficient approach to manipulating the expression of a target gene in vivo [ 19 , 40 ]. The results suggested that in vivo overexpression of NOD2 via lentiviral vectors significantly mitigates the severity of mouse OA. These findings open possibilities for potential clinical translation, offering avenues for the prevention and treatment of OA.
There are various intra-articular agents for local application to affected joints. Among them, glucocorticoids have been recommended for osteoarthritic patients suffering from pain, owing to their potent short-term analgesic efficacy [ 41 ]. However, repeated injection increases the risk of joint infection and systemic effects. Therefore, intra-articular agents with greater safety and efficacy are urgently needed [ 42 ]. Gene therapy via lentiviral vectors has emerged as a promising therapeutic option and has been applied clinically since August 2017, when the first lentivirus-mediated cellular therapy, tisagenlecleucel (CTL019, Kymriah), was approved in the USA for the treatment of acute lymphoblastic leukemia in children and adolescents. Theoretically, lentiviral vectors pose a risk of insertional mutagenesis, but available clinical data suggest that newer generation vectors strongly reduce this risk, as no relevant case has been reported to date [ 43 ]. Thus, lentivirus-mediated, macrophage-targeted therapy represents a promising approach for osteoarthritis.
Modulating the M1/M2 balance by inhibiting M1 macrophages has been explored in preclinical mouse models of osteoarthritis. Intra-articular injection avoids systemic administration, and local application has a lower impact on systemic immunity [ 44 ]. Additionally, synovial macrophages are derived from circulating macrophages, and there is currently no evidence that synovial macrophages return to the circulation on a clinically significant scale. Although promising, this approach may be associated with pathological consequences, including the promotion of autoimmune or inflammatory diseases, although this has not been definitively established [ 45 ]. Therefore, a comprehensive understanding of macrophage phenotypic and functional heterogeneity in synovial tissue should be acquired before specific subsets of macrophages can be targeted in translational studies, to ensure a balance between therapeutic benefits and potential risks.
In this study, NOD2 emerged as a critical inhibitor of macrophage activation and M1 polarization in response to HMGB1 stimulation, acting as a reciprocal modulator of the HMGB1/TLR4 signaling pathway in macrophages. This, in turn, reshapes the paracrine effects of activated macrophages on FLS and chondrocytes during OA pathogenesis. In conclusion, our findings highlight the potential of NOD2 as a preventative and therapeutic target in OA, though more in-depth investigations are needed to fully elucidate the underlying mechanisms before definitive conclusions can be reached.
Synovial inflammation, which precedes other pathological changes in osteoarthritis (OA), is primarily initiated by activation and M1 polarization of macrophages. While macrophages play a pivotal role in the inflammatory process of OA, the mechanisms underlying their activation and polarization remain incompletely elucidated. This study aims to investigate the role of NOD2 as a reciprocal modulator of HMGB1/TLR4 signaling in macrophage activation and polarization during OA pathogenesis.
Design
We examined NOD2 expression in the synovium and determined the impact of NOD2 on macrophage activation and polarization using knockdown and overexpression models in vitro. The paracrine effect of macrophages on fibroblast-like synoviocytes (FLS) and chondrocytes was evaluated under conditions of NOD2 overexpression. Additionally, the in vivo effect of NOD2 was assessed using a collagenase VII-induced OA model in mice.
Results
Expression of NOD2 was elevated in osteoarthritic synovium. In vitro experiments demonstrated that NOD2 serves as a negative regulator of HMGB1/TLR4 signaling pathway. Furthermore, NOD2 overexpression hampered the inflammatory paracrine effect of macrophages on FLS and chondrocytes. In vivo experiments revealed that NOD2 overexpression mitigated OA in mice.
Conclusions
Supported by convincing evidence on the inhibitory role of NOD2 in modulating the activation and M1 polarization of synovial macrophages, this study provided novel insights into the involvement of innate immunity in OA pathogenesis and highlighted NOD2 as a potential target for the prevention and treatment of OA.
Supplementary Information
The online version contains supplementary material available at 10.1186/s13075-023-03230-4.
Keywords | Supplementary Information
| Abbreviations
OA: Osteoarthritis
FLS: Fibroblast-like synoviocyte
MRI: Magnetic resonance imaging
DAMP: Damage-associated molecular pattern
HMGB1: High-mobility group box-1
PRR: Pattern recognition receptor
TLR: Toll-like receptor
TNF-α: Tumor necrosis factor α
NOD2: Nucleotide-binding oligomerization domain containing 2
CIOA: Collagenase-induced osteoarthritis
SPF: Specific pathogen free
oe-NOD2: NOD2 overexpression
OARSI: Osteoarthritis Research Society International
IHC: Immunohistochemical
PFA: Paraformaldehyde
BSA: Bovine serum albumin
HRP: Horseradish peroxidase
DAB: 3,3′-Diaminobenzidine
DAPI: 4′,6-Diamidino-2-phenylindole
BMDM: Bone marrow-derived macrophage
M-CSF: Macrophage colony-stimulating factor
DMEM: Dulbecco’s modified Eagle’s medium
FBS: Fetal bovine serum
MDP: Muramyl dipeptide
PCR: Polymerase chain reaction
RIPA: Radioimmunoprecipitation assay
PMSF: Phenylmethylsulphonyl fluoride
PVDF: Polyvinylidene difluoride
SDS-PAGE: Sodium dodecyl-sulfate polyacrylamide gel electrophoresis
ELISA: Enzyme-linked immunosorbent assay
siRNA: Small interfering RNA
LV: Lentivirus
DEG: Differentially expressed gene
Micro-CT: Micro-computed tomography
ROI: Region of interest
ODS: Optical density score
ANOVA: Analysis of variance
PPIN: Protein–protein interaction network
NLR: NOD-like receptor
NLRC2: NLR with a CARD 2
CARD15: Caspase recruitment domain-containing protein 15
LRR: Leucine-rich repeats
p-FAK: Phosphorylated focal adhesion kinase
DMM: Destabilization of medial meniscus
ACLT: Anterior cruciate ligament transection
MIA: Mono-iodoacetate
Acknowledgements
The authors would like to express their gratitude to all the patients who donated specimens for this study.
Authors’ contributions
CCL, SPL, and YD conceived the study and planned the design of experiments. CCL, ZJOY, YHH, PDG, and THL performed the experiments and analyzed the data. CCL wrote the manuscript. SXL, JX, JLW, ZC, and HYW assisted with interpretation of the data and provided critical revision of the article for important intellectual content. CCL, ZJOY, YHH, SPL, and YD take responsibility for the integrity of the work as a whole. All authors read and approved the final manuscript.
Funding
This work was supported by Guangdong Medical Research Foundation [A2020094], Guangdong Basic and Applied Basic Research Foundation [2021A1515110996, 2023A1515010463], and Science and Technology Project of Guangzhou [202102020132, 202206010140].
Availability of data and materials
The data supporting the findings of this study are available from the corresponding author upon reasonable request.
Declarations
Ethics approval and consent to participate
Written informed consent was obtained before surgery, with approval from the Ethics Committee of Sun Yat-sen Memorial Hospital (Approval No. SYSEC-KY-KS-2021–243). All protocols and experiments were approved by the Institutional Animal Care and Use Committee of Sun Yat-sen University (Approval No. 20220223).
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.

License: CC BY. Citation: Arthritis Res Ther. 2023 Dec 20; 25:249.
PMC10739567 (PMID: 38129578)

Introduction
Malnutrition is an important problem among oncology patients, with estimated rates ranging from 30.9% to 83%, depending on cancer location and patient age [ 1 – 4 ]. The muscle wasting disorders cachexia and sarcopenia are commonly associated with malnutrition in cancer patients; an estimated 50–80% of cancer patients have cachexia and 20–70% have sarcopenia (depending on tumour type) [ 5 – 7 ]. The consequences of cancer-related muscle wasting include increased mortality [ 8 , 9 ], negative effects on treatments (e.g. toxicities, termination of treatment, poor response, reduced tolerance) [ 8 – 10 ], and increased risk of post-operative complications [ 11 ]. Malnutrition in oncology patients can also result in decreased functional capacity [ 12 ], psychosocial symptoms [ 12 ], and lower health-related quality of life [ 13 ]. Furthermore, oncology patients who are malnourished (or who are at risk of malnutrition) spend more time in hospital [ 1 , 14 ] and are readmitted more often [ 15 , 16 ], constituting a substantial economic burden.
Nutritional interventions, including nutritional counselling, oral nutrition supplements, or enteral nutrition, are used to prevent or manage malnutrition. However, when these options are not feasible, are contraindicated, or ineffective, parenteral nutrition (PN) is recommended [ 17 – 20 ]. Parenteral nutrition is the intravenous administration of nutrients such as amino acids, glucose, lipids, electrolytes, vitamins, and trace elements, and can be delivered either at home (home parenteral nutrition [HPN]), or in a hospital setting [ 21 ].
PN is commonly used in hospitals to provide supplemental or total nutrition support to patients who are unable to maintain their nutritional status via the oral or enteral route [ 22 ]. In some cases (such as advanced cancer), patients require long-term PN, which necessitates the use of HPN [ 22 ]. However, oncology inpatients on PN (especially intensive care unit (ICU) patients) and outpatients on HPN have markedly different rates of infection and clinical outcomes. Whilst concerns regarding catheter-related infections had previously limited the use of HPN, its application is becoming more common, increasing by 55% in Italy between 2005 and 2012 [ 23 ].
In oncology patients, adequate protein consumption has been linked to lower rates of malnutrition, improved treatment outcomes, and longer survival [ 19 , 24 , 25 ]. As such, the European Society for Clinical Nutrition and Metabolism (ESPEN) guidelines recommend consuming at least 1.0 g/kg/day of protein (Table 1 ) [ 7 , 19 ]. This recommendation is higher than the requirement for healthy individuals (0.8 g/kg/day), reflecting the positive correlation between higher protein intake, protein balance, and muscle mass [ 7 , 19 , 25 ]. However, increased energy and protein intake may not prevent or reduce weight loss in all patients. Anabolic resistance may be present in oncology patients; hence, higher amounts of protein than in healthy individuals (≥ 1.2 and possibly up to 2 g/kg/day) may be required to achieve protein balance [ 7 , 19 , 26 ]. It has also been suggested that older patients with severe illness or malnutrition may need up to 2.0 g/kg/day [ 27 ]. High-protein PN (> 1.5 g/kg/day) could therefore be particularly beneficial for these patients, to rebuild muscle mass and prevent further muscle loss. High-protein PN at this dose has already been shown to be effective in other patient populations, such as critically ill patients in the ICU setting [ 28 , 29 ].
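The g/kg/day recommendations above lend themselves to a quick sanity check: converting a per-kilogram recommendation into an absolute daily protein target for a given patient. The following is a minimal illustrative sketch, not part of the review; the tier names and function are our own, while the numeric thresholds come from the text.

```python
# Illustrative only: dose tiers cited in the text (g of protein per kg of
# body weight per day); tier names are hypothetical labels, not ESPEN terms.
RECOMMENDATIONS_G_PER_KG = {
    "healthy_adult": 0.8,             # requirement for healthy individuals
    "espen_minimum": 1.0,             # ESPEN minimum for oncology patients
    "anabolic_resistance_low": 1.2,   # lower bound if anabolic resistance suspected
    "anabolic_resistance_high": 2.0,  # suggested upper bound for severe illness
    "high_protein_threshold": 1.5,    # > 1.5 g/kg/day is "high-protein" here
}

def daily_protein_target(weight_kg: float, rate_g_per_kg: float) -> float:
    """Absolute daily protein target in grams for a given body weight."""
    if weight_kg <= 0 or rate_g_per_kg <= 0:
        raise ValueError("weight and rate must be positive")
    return weight_kg * rate_g_per_kg

# Example: a 70 kg patient needs 70 g/day at the ESPEN minimum,
# and 140 g/day at the 2.0 g/kg/day upper bound.
print(daily_protein_target(70, RECOMMENDATIONS_G_PER_KG["espen_minimum"]))            # 70.0
print(daily_protein_target(70, RECOMMENDATIONS_G_PER_KG["anabolic_resistance_high"])) # 140.0
```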
The aim of this systematic literature review (SLR) was to understand the value of high-protein HPN and its impact on outcomes in malnourished oncology patients. Specifically, the SLR sought to identify and collate published studies on malnourished cancer patients receiving HPN, in which protein/amino acid delivery was reported in g/kg/day, to compare outcomes between patients receiving low (< 1 g/kg/day), standard (1–1.5 g/kg/day), and high-protein doses (> 1.5 g/kg/day).

Methods
The SLR was performed in accordance with Cochrane Collaboration [ 30 ], Centre for Reviews and Dissemination (CRD) [ 31 ], and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 32 ].
Literature sources and searches
Electronic database searches were performed on October 5 th , 2021, in Embase, MEDLINE, the Cochrane Database of Systematic Reviews (CDSR), the Cochrane Central Register of Controlled Trials (CENTRAL), the Database of Abstracts of Reviews of Effects (DARE), the National Health Service Economic Evaluation Database (NHS EED), and the Health Technology Assessment Database (HTAD). All databases were searched via the Ovid platform, with the searches date-limited from 2005 to ensure that only contemporary data were captured. The database searches were complemented by hand-searching of the National Institutes of Health (NIH) trial registry ( https://clinicaltrials.gov/ ), and the proceedings of seven oncology- and nutrition-themed conferences held since January 2019. Conference hand-searching was date-limited from 2019 onwards as it was presumed that any high-quality abstracts presented before this date would now be available as full publications. The bibliographic reference lists of included studies and relevant SLRs and meta-analyses identified during screening were also hand-searched. Full details of the SLR search strategy are provided in Supplement 1. The SLR protocol was not pre-registered in any protocol registry or online repository.
Study selection criteria
The population, intervention, comparator(s), outcomes, and study design (PICOS) elements used to assess study eligibility are presented in Supplement 2. Studies were eligible for inclusion if they reported on malnourished oncology outpatients receiving HPN, and protein or amino acid delivery was reported in g/kg/day. Oncology patients were considered to be malnourished if: (a) the publication explicitly described patients as having malnutrition of any kind, low body weight, clinically significant weight loss, low body mass index (BMI), clinically significant BMI reduction, cachexia, sarcopenia, or muscle wasting/loss/atrophy; and/or (b) cancer stage was described as incurable, non-curative, palliative, end-of-life, advanced, metastatic, late stage, stage IV, or hospice-treated. This approach was adopted given that patients with advanced cancer are typically only prescribed PN if malnourished or have a non-functioning gastrointestinal tract [ 19 ]. Eligible study designs included randomised controlled trials (RCTs), non-randomised multi-arm trials, single-arm trials, and prospective or retrospective observational studies. Eligible studies must have been written in English, although there were no restrictions on country of origin.
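The two-part malnutrition criterion above is essentially a keyword rule over free-text patient descriptions. A hypothetical sketch of how such a screen could be encoded follows; the term lists paraphrase criteria (a) and (b) from the text, and the function name is our own.

```python
# Hypothetical screening sketch: a publication's patient description meets the
# review's malnutrition criterion if it matches criterion (a) (malnutrition /
# muscle wasting terms) or criterion (b) (advanced-stage terms).
MALNUTRITION_TERMS = {
    "malnutrition", "low body weight", "weight loss", "low bmi",
    "bmi reduction", "cachexia", "sarcopenia",
    "muscle wasting", "muscle loss", "muscle atrophy",
}
ADVANCED_STAGE_TERMS = {
    "incurable", "non-curative", "palliative", "end-of-life", "advanced",
    "metastatic", "late stage", "stage iv", "hospice",
}

def meets_malnutrition_criteria(description: str) -> bool:
    """True if the description matches criterion (a) or criterion (b)."""
    text = description.lower()
    return any(term in text for term in MALNUTRITION_TERMS | ADVANCED_STAGE_TERMS)

print(meets_malnutrition_criteria("Stage IV metastatic gastric cancer"))       # True
print(meets_malnutrition_criteria("Stage I breast cancer, well nourished"))    # False
```

In practice such a rule would only pre-filter candidates for the two human reviewers described below, not replace them.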
Screening and extraction
All publications were screened against the predefined eligibility criteria by two independent reviewers at both the title/abstract and full-text screening stages. Any conflicts were resolved via dialogue between the two reviewers, and where necessary, a third reviewer provided arbitration. Full lists of included and excluded publications are provided in Supplement 3. Data from included publications were extracted into standardised data extraction tables in Microsoft® Excel by one individual, with all information checked and validated by a second individual. Data extracted from eligible publications included the country of origin, study design, study dates, sample size, participant age, sex, cancer stage and location, performance status, nutritional status, PN type (total or supplemental), protein dose, and key study findings, including clinical, safety, and quality of life outcomes. Energy data provided in kJ were converted into kcal by dividing by 4.184. No formal risk of bias assessment was performed due to the heterogeneity of study designs encountered, and the fact that quantitative data synthesis was not conducted.

Results
The electronic database searches identified 3,333 citations. After removal of 877 duplicates, 2,133 publications at the title/abstract screening stage, and 305 publications at the full-text screening stage, 18 publications from the electronic database searches were deemed eligible for inclusion in the SLR. Hand-searching yielded one additional eligible publication, resulting in a total of 19 publications included in the SLR (Fig. 1 ).
Study and patient characteristics
Detailed study characteristics are presented in Table 2 . The 19 included publications consisted of one RCT [ 33 ], one single-arm trial [ 34 ], 10 prospective observational studies [ 35 – 44 ], and seven retrospective observational studies [ 24 , 45 – 50 ]. The most common country of origin was Italy (11 publications) [ 35 , 36 , 38 – 42 , 46 – 48 , 50 ], followed by Denmark (four publications) [ 33 , 35 , 36 , 45 ]. Sample size in the included publications ranged from 19 [ 50 ] to 1,014 [ 47 ], while the age of patients was between 48.8 (mean) [ 49 ] and 68 years (median) [ 41 , 43 ]. The most common types of cancer were gastrointestinal (17 publications) [ 24 , 33 , 35 – 39 , 41 – 50 ], pancreatic (12 publications) [ 24 , 33 – 36 , 38 , 39 , 41 , 44 , 47 , 48 , 50 ], and ovarian (11 publications) [ 35 , 36 , 38 , 40 – 42 , 44 , 47 – 50 ]. Cancer stage was described as advanced or Stage III in 10 publications [ 24 , 38 – 44 , 47 , 49 ], metastatic or Stage IV in nine publications [ 34 , 37 – 43 , 45 ], and incurable, palliative, or terminal in nine publications [ 24 , 33 , 35 , 36 , 45 – 48 , 50 ]. Twelve publications reported details of prior or concurrent anticancer treatments [ 24 , 33 , 35 , 37 – 44 , 47 , 49 ], while seven publications did not [ 24 , 34 , 36 , 45 , 46 , 48 , 50 ].
Overview of PN intervention reported
Details of the PN interventions are described in Table 3 . The type of PN was a mix of total and supplemental PN in seven publications [ 37 , 39 – 42 , 44 , 47 ], total PN alone in five publications [ 35 , 36 , 48 – 50 ], supplemental PN alone in five publications [ 33 , 34 , 38 , 43 , 45 ], and was unclear in the two remaining publications [ 24 , 46 ]. Protein dose ranged from 0.77 to 1.5 g/kg/day [ 24 ]. Sixteen publications investigated standard-protein doses (1–1.5 g/kg/day) [ 33 – 42 , 44 , 46 – 50 ], two reported on low-protein doses (< 1 g/kg/day) [ 43 , 45 ], and one included both [ 24 ], but none involved high-protein doses (> 1.5 g/kg/day). For publications that reported a target dose but not the dose delivered, it was assumed that the target dose was the dose delivered. In two publications [ 35 , 36 ], targeted protein delivery was reported only as ≥ 1 g/kg/day; in the absence of an upper limit, it was assumed that targeted protein delivery fell within the standard range (1–1.5 g/kg/day). Seventeen publications reported energy intake [ 33 , 35 – 50 ], which ranged between 19.7 [ 24 ] and 40.2 [ 33 ] kcal/kg/day. The duration of PN administration was reported in nine studies [ 24 , 33 – 35 , 38 , 42 – 45 ]; the shortest duration was 28 days (total) [ 37 ] and the longest was 364.9 days (median) [ 45 ].
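The low/standard/high banding applied throughout this review can be expressed as a small classifier. The sketch below uses the thresholds stated in the text (low < 1, standard 1–1.5, high > 1.5 g/kg/day); the function name is our own.

```python
def protein_dose_band(dose_g_per_kg_day: float) -> str:
    """Classify a delivered protein dose into the review's three bands."""
    if dose_g_per_kg_day <= 0:
        raise ValueError("dose must be positive")
    if dose_g_per_kg_day < 1.0:
        return "low"
    if dose_g_per_kg_day <= 1.5:
        return "standard"
    return "high"

# For example, the doses reported across the included studies all fall
# below the high band:
print(protein_dose_band(0.77))  # low
print(protein_dose_band(1.15))  # standard
print(protein_dose_band(1.6))   # high
```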
Change in body mass
Nine studies reported changes in body mass or BMI (Table 3 ) [ 33 , 34 , 37 , 39 , 43 – 45 , 47 , 48 ]. Of these, Culine et al. (2014) [ 37 ], Cotogni et al. (2018) [ 39 ], Vashi et al. (2014) [ 44 ], Goodrose-Flores et al. (2020) [ 24 ], and Santarpia et al. (2006) [ 48 ] reported that receipt of protein-containing HPN significantly increased body mass or BMI over the course of the study period, although all studies administered standard doses of protein (range: 1–1.5 g/kg).
The RCT by Obling et al. (2019) [ 33 ] reported that, with standard protein doses, median BMI increased between baseline and visit 5 in both the non-supplemental HPN (non-sHPN) group (best practice nutritional care and dietetic counselling) (from 21.3 to 22.9 kg/m 2 ) and the supplemental HPN (sHPN) group (sHPN and dietetic counselling) (from 21.5 to 23.5 kg/m 2 ), but did not report whether the change from baseline was significant. Ruggeri et al. (2021) [ 47 ] reported that one month of sHPN increased BMI to a greater extent than one month of total HPN (0.21 vs 0.04, respectively), with a mean protein dose of 1.3 g/kg, but did not report whether the change from baseline was significant.
Two studies, Ma et al. (2021) [ 43 ] and Pelzer et al. (2010) [ 34 ], reported that low to standard protein (0.6–1.5 g/kg/day) did not significantly increase body weight or BMI during the study period.
Change in other outcomes
Due to the high heterogeneity in the other outcomes examined, only change in body mass is reported in this publication. For details of the other reported outcomes, please see Supplement 4.
Key studies identified
The only study to compare two different protein doses in malnourished oncology patients was Goodrose-Flores et al. (2020) [ 24 ], a retrospective analysis of medical records of 124 patients receiving palliative cancer care in Sweden between 2016 and 2018. The most common type of cancer was gastrointestinal (40%). One group of patients ( n = 20) received a mean 1.15 g/kg/day of protein, while the other group ( n = 104) received a mean 0.77 g/kg/day. Percentage weight gain from baseline was calculated after patients had received HPN for between three weeks and two months; average weight gain was significantly greater amongst patients receiving the higher of the two protein doses (3.3% vs 0.12%, p = 0.04). To investigate the safety of the HPN interventions, liver enzymes were assessed in 85 patients (1.15 g/kg/day group, n = 19; 0.77 g/kg/day group, n = 66), as an indicator of possible liver dysfunction. The proportion of patients with elevated liver enzymes did not differ significantly between the two treatment groups ( p = 0.34); elevated liver enzymes were observed in 11% and 24% of patients in the 1.15 and 0.77 g/kg/day groups, respectively.
In the only RCT identified by the SLR, Obling et al. (2019) compared supplemental HPN plus dietetic counselling (sHPN) with non-sHPN (best practice nutritional care and dietetic counselling) [ 33 ]. The sHPN group did not receive significantly more energy, but protein intake was significantly higher at visits 2, 3, and 5 (95% confidence interval [CI]: 0.38, 0.47; p < 0.05). Overall, median protein intake ranged from 1.08–1.39 g/kg/day in the sHPN arm vs 1.10–1.16 g/kg/day in the non-sHPN arm. After 12 weeks, 69% of patients in the sHPN group had increased their fat-free mass, compared with 40% of patients in the non-sHPN group ( p < 0.01).
The only other comparative study was the prospective observational study by Cotogni et al. (2022) [ 41 ], which compared HPN with artificial hydration. The prescribed energy of HPN was 25–30 kcal/kg/day, with prescribed protein of 1–1.5 g/kg/day, while the artificial hydration group received balanced salt solutions of 1L, 1.5L, or 2L, depending on their body mass. The results demonstrated that patients on HPN survived for significantly longer than those on artificial hydration (median overall survival was 4.3 vs 1.5 months, respectively, 95% CI: 0.015, 0.059; p < 0.001) [ 41 ].
Lastly, the prospective observational study by Vashi et al. (2014) [ 44 ] had a targeted protein dose of 1.5–2 g/kg/day for patients with BMI < 30 kg/m 2 , and 2–2.5 g/kg/day for patients with BMI ≥ 30 kg/m 2 . However, actual protein delivery was only 1.3–1.5 g/kg/day (within the standard range). In this publication, HPN was associated with the greatest improvements in quality of life and bodyweight at 3 months compared with baseline (global quality of life score: 54.4 vs 30.6, respectively; p = 0.02; weight: 65.9 vs 61.1 kg, respectively; p = 0.04).

Discussion
To the best of our knowledge, this was the first SLR with the aim of comparing outcomes between malnourished oncology patients receiving low- (< 1 g/kg/day), standard- (1–1.5 g/kg/day), or high-protein HPN doses (> 1.5 g/kg/day). However, no studies were identified reporting on high-protein HPN in this population. Therefore, to assess the suitability of high-protein HPN in oncology patients in the absence of relevant studies identified by the SLR, a broader approach was taken and evidence from alternative settings and populations will also be discussed.
The SLR identified one study where two different protein doses were compared: Goodrose-Flores et al. (2020) [ 24 ]. This retrospective observational study examined the effect of 0.77 g/kg/day vs 1.15 g/kg/day protein HPN in malnourished oncology patients. Although the 1.15 g/kg/day group did not receive what would be considered a high-protein dose (i.e. > 1.5 g/kg/day), the results were promising and support the notion of increased protein intake. Patients in the 1.15 g/kg/day group gained significantly more weight than those in the 0.77 g/kg/day group, with no evidence that they were at greater risk of liver damage.
The same research group provide further evidence for the safety of increased protein in Schedin et al. (2020) [ 51 ]. This publication was not eligible for inclusion in the SLR, since the sample included a mixture of oncology and non-oncology patients. Schedin et al. (2020) assessed potential risk factors for catheter-related bloodstream infection (CRBSI) in palliative care patients receiving HPN, and one risk factor considered was the protein content of PN (median protein delivery was 1.20, 0.82, or 0.58 g/kg/day). This publication found no statistically significant effect of the three different protein doses of HPN on the incidence of CRBSI ( p = 0.13). However, as both Goodrose-Flores et al. (2020) and Schedin et al. (2020) compared standard vs low rather than standard vs high-protein doses, one cannot conclude that increasing the protein dose above 1.5 g/kg/day would yield additional clinical benefits without increasing adverse events [ 24 , 51 ].
The SLR identified three other studies of note, Obling et al. (2019) [ 33 ], Cotogni et al. (2022) [ 41 ], and Vashi et al. (2014) [ 44 ]. In the RCT by Obling et al. (2019), increased fat-free mass was observed in a significantly greater proportion of patients in the group receiving more protein, despite energy intakes not being significantly different. Although the protein doses were in the standard range (1–1.5 g/kg/day), this study provides further evidence supporting the use of increased protein compared with lower protein in malnourished oncology patients using HPN. In the prospective observational study by Cotogni et al. (2022) [ 41 ], patients receiving HPN survived for significantly longer than those on artificial hydration. However, it is unclear if there were differences in energy intake between the two groups, which may have implications for this result. In the prospective observational study by Vashi et al. (2014) [ 44 ], the target protein dose was 1.5–2 g/kg/day for patients with BMI < 30 kg/m 2 , and 2–2.5 g/kg/day for patients with BMI ≥ 30 kg/m 2 , making it the only publication identified by the SLR that explicitly aimed for high-protein intake. However, actual protein delivery was 1.3–1.5 g/kg/day, which potentially highlights the difficulty of meeting such high protein targets.
Bouleuc et al. (2020) reported that PN (protein range: 1.2–1.5 g/kg/day) did not improve quality of life or survival, and was associated with more serious adverse events (mainly infections) than oral feeding ( p = 0.01) [ 52 ]. This study was not eligible for inclusion in the SLR, as it was unclear what proportion of cancer patients received PN at home as opposed to in a hospital setting. In addition, this study was not generalisable to the current research question, as in the PN arm, 46% of patients had an Eastern Cooperative Oncology Group (ECOG) performance status of 3 or 4, and therefore the study’s inclusion criteria did not comply with indications for HPN according to recent guidelines [ 19 ]. Additionally, in the PN arm, 60% of patients had gained weight or had 0–5% weight loss in the previous month, and so may not have been malnourished.
Overall, the results of the studies identified by the SLR lend support to the idea that increased protein intake could benefit malnourished oncology patients. However, since no studies evaluated high-protein doses, statistical analyses to compare the included studies were not feasible. Therefore, there is a clear need for future studies to determine optimal protein dose by comparing alternative doses within a single patient population.
In theory, the safety and efficacy of high-protein HPN is biologically plausible. Net muscle protein balance is required for increasing skeletal muscle mass, and nutrition is a potent anabolic stimulus [ 53 ]. Specifically, the postprandial increase in circulating amino acids stimulates muscle protein synthesis [ 53 ]. Winter et al. (2012) reported that in ten male patients with non-small cell lung cancer, protein synthesis was stimulated by increased amino acid provision resulting in hyperaminoacidaemia with increased peripheral glucose uptake [ 54 ]. Furthermore, administration of the branched chain amino acids leucine and valine increased skeletal muscle protein synthesis in a mouse model without any measurable effect on tumour mass [ 55 ]. Similarly, supplementation with leucine (0.052 g/kg of bodyweight) has been demonstrated to increase skeletal muscle protein synthesis in healthy elderly men [ 56 ]. Furthermore, intravenous administration of up to 2.0 g/kg/day amino acids demonstrated safety in an RCT of 474 ICU patients [ 57 ]. Taken together, these studies suggest that high-protein PN is an effective and safe practice, at least acutely. Notably, older oncology patients appear to have anabolic resistance to protein, although the same does not appear to be true for younger patients [ 54 , 58 , 59 ].
Research in non-PN settings and critically ill patients has demonstrated the value of increased protein intake for quality of life, prevention of sarcopenia, and mortality [ 60 – 63 ]. In a retrospective study of adult outpatients with advanced gastrointestinal cancer, Pimentel et al. (2021) reported that although a high-protein oral diet (2.2 ± 0.8 g/kg/day) was not associated with better muscle function as measured by handgrip strength, increased protein intake was associated with increased overall survival, compared with a low protein diet (0.8 ± 0.4 g/kg/day) [ 62 ]. Ferrie et al. (2016) [ 61 ], a double-blinded RCT of 119 critically ill patients, demonstrated that, when compared with 0.9 g/kg/day amino acids, 1.1 g/kg/day amino acids was associated with small improvements in several measures (grip strength, less fatigue, greater forearm muscle thickness, and better nitrogen balance), with no difference between groups in mortality or length of stay. In another RCT by De Azevedo et al. (2021) [ 63 ], a high-protein oral diet supplemented by PN (1.48 g/kg/day) and resistance exercise significantly improved the physical quality of life and survival of critically ill patients at 3- ( p = 0.01) and 6-months ( p = 0.001) compared with a control group receiving 1.19 g/kg/day. Mortality was also significantly lower ( p = 0.006). Additionally, a recent SLR investigated the impact of protein intake on muscle mass in cancer patients; across eight included studies, protein intake < 1.2 g/kg was associated with muscle wasting, whereas protein intake > 1.4 g/kg was associated with muscle maintenance [ 64 ]. These studies were not eligible for inclusion in the current SLR, as the route of feeding was oral and/or enteral.
Two large clinical trials, EFFORT and NEXIS, should yield further valuable data regarding high-protein nutrition in the near future, although neither are perfectly aligned with the current research question. The EFFORT trial will compare protein targets of ≤ 1.2 g/kg/day and ≥ 2.2 g/kg/day, while the NEXIS trial will contrast patients on standard care with those completing an in-bed exercise regime and receiving total protein delivery of 2.0–2.5 g/kg/day. Both studies investigate ICU-based nutrition rather than HPN, and PN is not mandatory (protein targets can be met by any combination of enteral nutrition, oral supplements, and PN). Furthermore, the populations are not limited to cancer patients (critically ill patients of any kind are eligible). Similar studies should be performed with malnourished oncology patients to determine optimal HPN protein dosing, and the impact of combining nutrition and exercise to improve outcomes.
The main limitation of the present SLR was the absence of publications reporting on high-protein (> 1.5 g/kg/day) HPN in malnourished oncology patients. Although a subsequent targeted literature search identified studies demonstrating effective high-protein PN in other populations and settings, the approach has not yet been translated to HPN and oncology. Hence, in the absence of further data, the threshold between high- and low-protein content remains subjective, ensuring ongoing debate regarding optimal protein delivery. For example, while the ESPEN Expert Group [ 65 ] recommends older adults with acute or chronic illnesses consume 1.2 to 1.5 g/kg/day (and even more for those with severe illness or injury), Op den Kamp et al. 2009 [ 66 ] emphasize a baseline of 1.5 g/kg/day or 15–20% of the total caloric intake for patients with cachexia, while Bauer et al. 2019 [ 67 ] advocate for an intake ranging from 1 to 1.5 g/kg/day paired with physical exercise for patients with sarcopenia.
Furthermore, only one RCT was identified. A major limitation of non-randomised studies is that, when outcomes of patients receiving PN are compared with those of patients not receiving PN, the two groups can differ. Such comparisons can be inherently biased, as patients selected for PN often present with more severe malnutrition and its associated complications than those not eligible for PN. This disparity introduces potential confounding variables, and any conclusions should be interpreted with caution.
In addition, two publications identified by the SLR involved patients from multiple European countries including the UK, but did not present results by country [ 35 , 36 ]. This is notable, as the use of PN differs across Europe; in many countries PN is supplemental, whereas it is often used for intestinal failure in the UK, and thus unlikely to be supplemental.
Lastly, this SLR focused on HPN rather than PN in an inpatient setting, as the two populations are not directly comparable. Inpatients typically require short-term as opposed to long-term PN, while the incidence of CRBSIs, other complications, and mortality also differ. The SLR focused on outpatients alone because they are a more homogeneous population.

Conclusions
Despite the biological plausibility and emerging evidence from critically ill patients, at the time of writing there is a lack of evidence investigating and supporting the use of high-protein HPN in malnourished oncology patients. A minimum of 1.5 g/kg/day or > 20% of total caloric intake from protein appears to be optimal for elderly individuals and advanced cancer inpatients. However, whether this is also appropriate for HPN in oncology patients remains to be determined. Studies using a variety of designs (such as acute single-arm safety studies and longer-term comparative studies with multiple protein doses) are needed to establish the efficacy and safety of this promising approach.

Introduction
Up to 83% of oncology patients are affected by cancer-related malnutrition, depending on tumour location and patient age. Parenteral nutrition can be used to manage malnutrition, but there is no clear consensus as to the optimal protein dosage. The objective of this systematic literature review (SLR) was to identify studies on malnourished oncology patients receiving home parenteral nutrition (HPN) where protein or amino acid delivery was reported in g/kg bodyweight/day, and to compare outcomes between patients receiving low (< 1 g/kg bodyweight/day), standard (1–1.5 g/kg/day), and high-protein doses (> 1.5 g/kg/day).
Methods
Literature searches were performed on 5 th October 2021 in Embase, MEDLINE, and five Cochrane Library and Centre for Reviews and Dissemination databases. Searches were complemented by hand-searching of conference proceedings, a clinical trial registry, and bibliographic reference lists of included studies and relevant SLRs/meta-analyses.
Results
Nineteen publications were included; sixteen investigated standard protein, two reported low protein, and one included both, but none assessed high-protein doses. Only one randomised controlled trial (RCT) was identified; all other studies were observational studies. The only study to compare two protein doses reported significantly greater weight gain in patients receiving 1.15 g/kg/day than those receiving 0.77 g/kg/day.
Conclusion
At present, there is insufficient evidence to determine the optimal protein dosage for malnourished oncology patients receiving HPN. Data from non-HPN studies and critically ill patients indicate that high-protein interventions are associated with increased overall survival and quality of life; further studies are needed to establish whether the same applies in malnourished oncology patients.
Author contributions
Pilar Garcia Lorda and Julian Shepelev contributed to the conception of the research; Paolo Cotogni, Clare Shaw, Paula Jimenez-Fonseca, Dom Partridge, David Pritchett, and Neil Webb contributed to the conception and design of the research; all authors contributed to acquisition, analysis, or interpretation of the data. All authors drafted the manuscript, critically revised the manuscript, agree to be fully accountable for ensuring the integrity and accuracy of the work, and read and approved the final manuscript.
Funding
This study was supported by Baxter Healthcare SA, Switzerland. The funder had no control over the systematic literature review study design, data collection, analysis, or interpretation of data in the writing of the report.
Data Availability
The data that support the findings of this study are available from the corresponding author on reasonable request.
Declarations
Competing interests
Paolo Cotogni, Clare Shaw, and Paula Jimenez-Fonseca received no sponsorship for participation in this study and declare that they have no competing interests in relation to this study. Paolo Cotogni reports previous speakers’ honoraria from Baxter International. Clare Shaw had received honoraria from BSNA, Boehringer Ingelheim and Eli Lilly. Dominic Partridge, David Pritchett, Neil Webb, and Amy Crompton are employees of Source Health Economics, the company that conducted the systematic literature review. Julian Shepelev and Pilar Garcia Lorda are employees of Baxter Healthcare. (Support Care Cancer. 2024 Dec 22; 32(1):52)
Materials and Methods
Reagents
Six linear oligopeptides (>95% pure), acetylated and amidated at the N- and C-termini, respectively ( Table 1 ), the linear vasoinhibin-(45-51) peptide (Vi45-51), and the cyclic retro-inverse vasoinhibin-(45-51) peptide (CRIVi45-51) were synthesized by GenScript (Piscataway, NJ). Recombinant vasoinhibin isoforms of 123 (Vi1-123) ( 16 ) or 48 residues (Vi1-48) ( 15 ) were produced as reported. Recombinant human prolactin (PRL) was provided by Michael E. Hodsdon ( 17 ) (Yale University, New Haven, CT). Human recombinant plasminogen activator inhibitor 1 (PAI-1) was from Thermo Fisher Scientific (Waltham, MA), and human tissue plasminogen activator (tPA) from Sigma Aldrich (St. Louis, MO). Rabbit monoclonal anti-PAI-1 [EPR17796] (ab187263, RRID:AB_2943367) and rabbit polyclonal anti-β-tubulin antibodies (Cat# ab6046, RRID:AB_2210370) were purchased from Abcam (Cambridge, UK), and mouse monoclonal anti-uPA receptor (anti-uPAR) from R&D systems (Minneapolis, MN, Cat# MAB807, RRID:AB_2165463). The NF-κB activation inhibitor BAY 11-7085 and lipopolysaccharides (LPS) from Escherichia coli O55:B5 were from Sigma Aldrich. Recombinant human vascular endothelial growth factor-165 (VEGF) was from GenScript, and basic fibroblast growth factor (bFGF) was donated by Scios, Inc. (Mountain View, CA).
Cell Culture
Human umbilical vein endothelial cells (HUVEC) were isolated ( 18 ) and cultured in F12K medium supplemented with 20% fetal bovine serum (FBS), 100 μg mL −1 heparin (Sigma Aldrich), 25 μg mL −1 endothelial cell growth supplement (ECGS) (Corning, Glendale, AZ), and 100 U mL −1 penicillin-streptomycin.
Cell Proliferation
HUVEC were seeded at 14 000 cells cm −2 in a 96-well plate and, after 24 hours, starved in 0.5% FBS, F12K for 12 hours. Treatments were added in 20% FBS, F12K containing 100 μg mL −1 heparin for 24 hours and consisted of 25 ng mL −1 VEGF and 20 ng mL −1 bFGF alone or in combination with 100 nM PRL (negative control), 123-residue vasoinhibin (Vi1-123) or 48-residue vasoinhibin (Vi1-48) (positive controls), the linear vasoinhibin analogue (Vi45-51), the cyclic retro-inverse vasoinhibin analogue (CRIVi45-51), or synthetic oligopeptides mapping to region 1 to 48 of vasoinhibin (1-15, 12-25, 20-35, 30-45, or 35-48). DNA synthesis was quantified by the incorporation into DNA of the thymidine analogue 5-ethynyl-2′-deoxyuridine (EdU; Sigma Aldrich) (10 μM), added at the time of treatments and labeled by the click reaction with Azide Fluor 545 (Sigma Aldrich) as reported ( 14 , 19 ). Total HUVEC were counterstained with Hoechst 33342 (Sigma Aldrich). Images were obtained with an inverted fluorescence microscope (Olympus IX51, Japan) and quantified using CellProfiler software ( 20 ).
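As a rough illustration of the quantification step, the proliferation readout reduces to a labeling index computed from per-image object counts. The helper and the counts below are hypothetical, not CellProfiler output from the study:

```python
# Minimal sketch (assumption: per-image nucleus counts exported from an
# image-analysis pipeline such as CellProfiler) of the EdU labeling
# index: the fraction of Hoechst-stained nuclei that are EdU-positive.

def edu_labeling_index(edu_positive: int, hoechst_total: int) -> float:
    """Return the EdU labeling index (%) for one image."""
    if hoechst_total == 0:
        raise ValueError("no nuclei counted")
    return 100.0 * edu_positive / hoechst_total

# Hypothetical counts for a growth factor-stimulated vs. a treated well
control = edu_labeling_index(edu_positive=180, hoechst_total=400)  # 45.0 %
treated = edu_labeling_index(edu_positive=90, hoechst_total=400)   # 22.5 %
```

Comparing such indices across wells is what distinguishes a treatment that blocks DNA synthesis from one that merely changes cell number.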
Cell Invasion
HUVEC invasion was evaluated using the Transwell Matrigel barrier assay ( 21 ). HUVEC were seeded at 28 000 cells cm −2 on the luminal side of an 8-μm-pore insert of a 6.5 mm Transwell (Corning) precoated with 0.38 mg mL −1 Matrigel (BD Biosciences, San Jose, CA) in starvation medium (0.5% FBS F12K, without heparin or ECGS). Treatments were added inside the Transwell and consisted of 100 nM PRL, Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48. Conditioned medium of 3T3L1 cells (ATCC, Manassas, VA) cultured for 2 days in 10% FBS was filtered (0.22 μm), supplemented with 50 ng mL −1 VEGF, and placed in the lower chamber as chemoattractant. Sixteen hours later, cells invading the bottom of the Transwell were fixed, permeabilized, Hoechst-stained, and counted using the CellProfiler software ( 20 ).
Leukocyte Adhesion Assay
HUVEC were seeded on a 96-well plate and grown to confluency. HUVEC monolayers were treated for 16 hours with 100 nM PRL, Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48 in 20% FBS, F12K without heparin or ECGS. Treatments were added alone or in combination with anti-PAI-1 (5 μg mL −1 ), anti-uPAR (5 μg mL −1 ), or anti-β-tubulin (5 μg mL −1 ) antibodies. The NF-κB activation inhibitor BAY 11-7085 (5 μM) was added 30 minutes prior to treatments. After the 16-hour treatment, HUVEC were exposed to a leukocyte preparation obtained as follows. Briefly, whole blood was collected into EDTA tubes, centrifuged (300 g for 5 minutes), and the plasma layer discarded. The remaining cell pack was diluted 1:10 in red blood cell lysis buffer (150 mM NH 4 Cl, 10 mM NaHCO 3 , and 1.3 mM EDTA disodium) and rotated for 10 minutes at room temperature. The tube was centrifuged (300 g for 5 minutes), and once erythrocytes were no longer visible, the supernatant was discarded and the leukocyte pellet collected. Leukocytes were washed with cold phosphate buffered saline (PBS), followed by another centrifugation step (300 g for 5 minutes), and resuspended in 5 mL of 5 μg mL −1 Hoechst 33342 (Thermo Fisher Scientific) diluted in warm PBS. Leukocytes were incubated under 5% CO 2 -air at 37 °C for 30 minutes, washed with PBS 3 times, and resuspended in 20% FBS, F12K to 10 6 leukocytes mL −1 . The medium of HUVEC was replaced with 100 μL of Hoechst-stained leukocytes (10 5 leukocytes per well) and incubated for 1 hour at 37 °C. Finally, HUVEC were washed 3 times with warm PBS, and images were obtained with an inverted fluorescence microscope (Olympus IX51) and quantified using the CellProfiler software ( 20 ).
Apoptosis
HUVEC grown to 80% confluency on 12-well plates were incubated under starving conditions (0.5% FBS F12K) for 4 hours. Then, HUVEC were treated for 24 hours with 100 nM PRL, Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48 in 20% FBS, F12K without heparin or ECGS. Treatments were added alone or in combination with anti-PAI-1 (5 μg mL −1 ), anti-uPAR (5 μg mL −1 ), or anti-β-tubulin (5 μg mL −1 ) antibodies. The NF-κB activation inhibitor BAY 11-7085 (5 μM) was added 30 minutes before treatments. Apoptosis was evaluated using the cell death detection enzyme-linked immunosorbent assay (ELISA) kit (Roche, Basel, Switzerland). HUVEC were trypsinized, centrifuged, and resuspended in incubation buffer to 10 5 cells mL −1 . Cells were incubated at room temperature for 30 minutes and centrifuged at 20 000 g for 10 minutes (Avanti J-30I Centrifuge, Beckman Coulter, Brea, CA). The supernatant was collected and diluted 1:5 with incubation buffer (final concentration ∼20 000 cells mL −1 ). HUVEC concentration was standardized, and the assay was carried out according to the manufacturer's instructions, measuring absorbance at 415 nm.
Fibrinolysis Assay
Human blood was collected into a 3.2% sodium citrate tube (BD Vacutainer) and centrifuged (1200 g for 10 minutes at 4 °C) to obtain plasma. Plasma (24 μL) was added to a 96-well microplate containing 20 μL of 50 mM CaCl 2 . Turbidity was measured as an index of clot formation by monitoring absorbance at 405 nm every 5 minutes after plasma addition. Before adding plasma, 0.5 μM PAI-1 was preincubated in 10 mM Tris–0.01% Tween 20 (pH 7.5) at 37 °C for 10 minutes alone or in combination with 3 μM Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48. Once the clot was formed (∼20 minutes, at maximum absorbance), treatments were added to a final concentration per well of 24% v/v plasma, 10 mM CaCl 2 , 60 pM human tissue plasminogen activator (tPA), 0.05 μM PAI-1, and 0.3 μM Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48. Absorbance (405 nm) was measured every 5 minutes to monitor clot lysis.
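The turbidity time course lends itself to a simple lysis metric. The sketch below is an illustrative analysis, not the authors' stated method; the `lysis_time_50` helper and the readings are hypothetical. It estimates the first time point at which absorbance falls to half of the clot's peak turbidity:

```python
# Illustrative 50% clot-lysis time from A405 readings taken every
# 5 minutes (hypothetical values; not data from the study).

def lysis_time_50(times_min, a405):
    """Return the first sampled time (minutes) at which absorbance
    drops to or below half of its maximum, or None if never reached."""
    a_max = max(a405)
    peak = a405.index(a_max)          # clot fully formed here
    half = a_max / 2.0
    for t, a in zip(times_min[peak:], a405[peak:]):
        if a <= half:
            return t
    return None

times = [0, 5, 10, 15, 20, 25, 30, 35]
od = [0.10, 0.60, 0.90, 1.00, 0.80, 0.55, 0.45, 0.30]
lysis_time_50(times, od)  # 30 (first reading at or below 0.50)
```

A shorter half-lysis time in the presence of a peptide, with tPA and PAI-1 held constant, would indicate relief of PAI-1 inhibition.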
PAI-1 Binding Assay
A 96-well ELISA microplate was coated overnight at 4 °C with 50 μL of 6.25 μM PRL, Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48, diluted in PBS. The microplate was blocked for 1 hour at room temperature with 5% w/v nonfat dry milk in 0.1% Tween-20-PBS (PBST), followed by 3 washes with PBST. Next, 100 nM PAI-1 diluted in 0.2 mg mL −1 bovine serum albumin (BSA)-PBST was added and incubated for 1 hour at room temperature, followed by a 3-wash step with PBST. Anti-PAI-1 antibodies (1 μg mL −1 diluted in blocking buffer) were added and incubated for 1 hour at room temperature. Microplates were then washed 3 times with PBST, and goat anti-rabbit HRP antibody (Jackson ImmunoResearch Labs, West Grove, PA, Cat# 111-035-144, RRID:AB_2307391) at 1:2500 (diluted in 50% blocking buffer and 50% PBS) was added and incubated for 1 hour at room temperature. Three final washes were done with PBST, and microplates were incubated for 30 minutes in darkness with 100 μL per well of an o-phenylenediamine dihydrochloride (OPD) substrate tablet dissolved in 0.03% H 2 O 2 citrate buffer (pH 5). Finally, the reaction was stopped with 50 μL of 3 M HCl, and absorbance was measured at 490 nm.
NF-κB Nuclear Translocation Assay
HUVEC were seeded on 1 μg cm −2 fibronectin-coated 18 mm-coverslips placed in 12-well plates and grown in complete media to 80% confluence. Then, cells were treated, under starving conditions (0.5% FBS F12K), with 100 nM PRL, Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 20-35 or 30-45. After 30 minutes, cells were washed with PBS, fixed with 4% of paraformaldehyde (30 minutes), permeabilized with 0.5% Triton-X (Tx)-100 in PBS (30 minutes), blocked with 5% normal goat serum, 1% BSA, 0.05% Tx-100 in PBS (1 hour), and incubated with 1:200 anti-NF-κB p65 antibodies (Santa Cruz Biotechnology, Santa Cruz, CA, Cat# sc-8008, RRID:AB_628017) in 1% BSA, 0.1% Tx-100 PBS overnight in a humidity chamber at 4 °C. HUVEC were washed and incubated with 1:500 goat anti-mouse secondary antibodies coupled to Alexa fluor 488 (Abcam, Cambridge, UK, Cat# ab150113, RRID:AB_2576208) in 1% BSA, 0.1% Tx-100 PBS (2 hours in darkness). Nuclei were counterstained with 5 μg mL −1 Hoechst 33342 (Sigma-Aldrich). Coverslips were mounted with Vectashield (Vector Laboratories, Burlingame, CA) and digitalized under fluorescence microscopy (Olympus IX51).
Quantitative Polymerase Chain Reaction of HUVECs
HUVEC at 80% confluency in 6-well plates under starving conditions (0.5% FBS F12K) were treated for 4 hours with 100 nM PRL, Vi1-123, Vi1-48, Vi45-51, CRIVi45-51, or the oligopeptides 1-15, 12-25, 20-35, 30-45, or 35-48. RNA was isolated using TRIzol (Invitrogen) and retrotranscribed with the high-capacity cDNA reverse transcription kit (Applied Biosystems). Polymerase chain reaction (PCR) products were obtained and quantified using Maxima SYBR Green qPCR Master Mix (Thermo Fisher Scientific) in a final reaction containing 20 ng of cDNA and 0.5 μM of each primer. The following human genes were quantified relative to GAPDH (5′-GAAGGTCGGAGTCAACGGATT-3′ and 5′-TGACGGTGCCATGGAATTTG-3′): ICAM1 (5′-GTGACCGTGAATGTGCTCTC-3′ and 5′-CCTGCAGTGCCCATTATGAC-3′), VCAM1 (5′-GCACTGGGTTGACTTTCAGG-3′ and 5′-AACATCTCCGTACCATGCCA-3′), IL1A (5′-ACTGCCCAAGATGAAGACCA-3′ and 5′-TTAGTGCCGTGAGTTTCCCA-3′), IL1B (5′-GGAGAATGACCTGAGCACCT-3′ and 5′-GGAGGTGGAGAGCTTTCAGT-3′), IL6 (5′-CCTGATCCAGTTCCTGCAGA-3′ and 5′-CTACATTTGCCGAAGAGCCC-3′), and TNF (5′-ACCACTTCGAAACCTGGGAT-3′ and 5′-TCTTCTCAAGTCCTGCAGCA-3′). Amplification consisted of 40 cycles of 10 seconds at 95 °C, 30 seconds at the annealing temperature of each primer pair, and 30 seconds at 72 °C. The mRNA expression levels were calculated by the 2 −ΔΔCT method.
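The arithmetic of the 2^−ΔΔCT method can be sketched as follows; the `fold_change` helper and the Ct values are hypothetical illustrations, not data from the study:

```python
# 2^-ddCt relative quantification: target gene normalized to a
# reference gene (here GAPDH) and to an untreated control sample.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Return fold change of target mRNA vs. the control condition."""
    d_ct_treated = ct_target - ct_ref            # dCt, treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # dCt, control sample
    dd_ct = d_ct_treated - d_ct_control          # ddCt
    return 2.0 ** (-dd_ct)

# Hypothetical example: ICAM1 Ct drops by 2 cycles relative to GAPDH
# after treatment, i.e. a 4-fold induction.
fold_change(ct_target=24.0, ct_ref=18.0,
            ct_target_ctrl=26.0, ct_ref_ctrl=18.0)  # 4.0
```

Because Ct is logarithmic in template amount, each unit of ΔΔCT corresponds to a 2-fold change, assuming roughly 100% amplification efficiency for both primer pairs.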
Animals
C57BL6 mice were housed under standard laboratory conditions. Experiments were approved by the Bioethics Research Committee of the Institute of Neurobiology of the National University of Mexico (UNAM) in compliance with the US National Research Council’s Guide for the Care and Use of Laboratory Animals (8th ed, National Academy Press, Washington, DC).
In Vivo Vascular Inflammation
Vascular inflammation was evaluated as previously reported ( 22 ). Briefly, female C57BL6 mice (8 weeks old) were injected intravenously with 16.6 μg of Vi45-51 or 40.7 μg of 30-45 in 50 μL of PBS to achieve ∼10 μM in serum. Controls were injected intravenously with 50 μL PBS. After 2 hours, animals were euthanized by cervical dislocation and perfused intracardially with PBS. A fragment of the lungs, liver, and kidneys and whole eyes were dissected and placed immediately in TRIzol reagent; RNA was extracted and retrotranscribed. The expression of Icam1 , Vcam1 , Cd45 , Il1b , Il6 , and Tnf was quantified relative to Gapdh by quantitative PCR as indicated for HUVEC, using the following primer pairs for the mouse genes: Icam1 (5′-GCTGGGATTCACCTCAAGAA-3′ and 5′-TGGGGACACCTTTTAGCATC-3′), Vcam1 (5′-ATTGGGAGAGACAAAGCAGA-3′ and 5′-GAAAAAGAAGGGGAGTCACA-3′), Cd45 (5′-TATCGCGGTGTAAAACTCGTCA-3′ and 5′-GCTCAGGCCAAGAGACTAACGT-3′), Il1b (5′-GTTGATTCAAGGGGACATTA-3′ and 5′-AGCTTCAATGAAAGACCTCA-3′), Il6 (5′-GAGGATACCACTCCCAACAGACC-3′ and 5′-AAGTGCATCATCGTTGTTCATACA-3′), and Tnf (5′-CATCTTCTCAAAATTCGAGTGACAA-3′ and 5′-TGGGAGTAGACAAGGTACAACCC-3′).
Joint Inflammation
Male C57BL/6 mice (8 weeks old) were injected into the articular space of knee joints with vehicle (saline), or 87 pmol of Vi45-51 (72 ng) or 30-45 (176.8 ng) in a final volume of 10 μL saline. Twenty-four hours after injections, animals were euthanized in a CO 2 -saturated atmosphere. Joints were extracted, pulverized with nitrogen, RNA extracted, retrotranscribed, and the expressions of mouse Il1b , Il6 , and Inos (5′-CAGCTGGGCTGTACAAACCTT-3′ and 5′-CATTGGAAGTGAAGCGTTTCG-3′) were quantified relative to Gapdh by quantitative PCR as described above. | Results
Antiangiogenic HGR-Containing Vasoinhibin Analogues Are Not Apoptotic, Inflammatory, or Fibrinolytic
The linear (Vi45-51) and cyclic retro-inverse (CRIVi45-51) HGR-containing vasoinhibin analogues, like the vasoinhibin standards of 123 residues (Vi1-123) and 48 residues (Vi1-48) ( Fig. 1A ), inhibited the VEGF- and bFGF-induced proliferation of HUVEC ( Fig. 1B ) and the VEGF-induced invasion of HUVEC ( Fig. 1C ) without affecting basal levels. These results confirm the antagonistic properties of HGR-containing vasoinhibin analogues ( 14 ) and validate their use to explore other vasoinhibin actions. PRL is not antiangiogenic ( 10 ) and was used as a negative control.
In contrast to the 2 vasoinhibin isoforms (Vi1-123 and Vi1-48), the HGR-containing vasoinhibin analogues failed to induce the apoptosis and inflammatory phenotype of HUVEC, as well as the lysis of a fibrin clot ( Fig. 1D-1G ). Vi1-123 and Vi1-48, but not Vi45-51, CRIVi45-51, or PRL, stimulated the apoptosis of HUVEC, as revealed by DNA fragmentation measured by ELISA ( Fig. 1D ), the adhesion of peripheral blood leukocytes to HUVEC monolayers ( Fig. 1E ), and the in vitro lysis of a plasma clot ( Fig. 1F and 1G ). Once the clot is formed (time 0), adding the thrombolytic agent tPA stimulates clot lysis, an action prevented by the coaddition of PAI-1. This inhibition by PAI-1 was reduced by Vi1-123 and Vi1-48 but not by Vi45-51, CRIVi45-51, or PRL ( Fig. 1F and 1G ). Because binding to PAI-1 mediates the fibrinolytic properties of vasoinhibin ( 23 ), the capacity to bind PAI-1 was evaluated by adding PAI-1 to ELISA plates coated with or without PRL, Vi1-123, Vi1-48, Vi45-51, or CRIVi45-51. The absorbance of the HRP-labeled antibody-PAI-1 complex increased only in the presence of Vi1-123 and Vi1-48, but not in uncoated wells or in wells coated with the 2 HGR-containing vasoinhibin analogues or with PRL ( Fig. 1H ).
These findings show that the HGR-containing vasoinhibin analogues lack the apoptotic, inflammatory, and fibrinolytic properties of vasoinhibin. The fact that PRL is not inflammatory, apoptotic, or fibrinolytic indicates that, like the antiangiogenic effect ( 14 ), these vasoinhibin properties emerge upon PRL cleavage.
HGR-Containing Vasoinhibin Analogues Do Not Stimulate the Nuclear Translocation of NF-κB and the Expression of Inflammatory Molecules in HUVEC
Because vasoinhibin signals through NF-κB to induce the apoptosis and inflammation of endothelial cells ( 11 , 12 ), we asked whether HGR-containing vasoinhibin analogues were able to promote the nuclear translocation of NF-κB and the expression of proinflammatory mediators in HUVEC ( Fig. 2 ). The distribution of NF-κB in HUVEC was studied using fluorescence immunocytochemistry and monoclonal antibodies against the p65 subunit of NF-κB ( Fig. 2A ). Without treatment, p65 was homogeneously distributed throughout the cytoplasm of cells. Treatment with Vi1-123 or Vi1-48, but not with Vi45-51, CRIVi45-51, or PRL, resulted in the accumulation of p65-positive staining in the cell nucleus ( Fig. 2A ), indicative of the NF-κB nuclear translocation/activation needed for transcription. Consistently, only the vasoinhibin isoforms (Vi1-123 or Vi1-48), and neither the HGR-containing vasoinhibin analogues nor PRL, induced the mRNA expression of genes encoding leukocyte adhesion molecules (intercellular adhesion molecule 1 [ ICAM1 ] and vascular cell adhesion molecule 1 [ VCAM1 ]) and proinflammatory cytokines (IL-1α [ IL1A ], IL-1β [ IL1B ], IL-6 [ IL6 ], and tumor necrosis factor α [ TNF ]) in HUVEC ( Fig. 2B ). These findings show that HGR-containing vasoinhibin analogues are unable to activate NF-κB to promote the gene transcription resulting in the apoptosis and inflammation of HUVEC. Furthermore, these results suggest that a structural determinant different from the HGR motif is responsible for these properties.
Oligopeptides Containing the HNLSSEM Vasoinhibin Sequence Are Inflammatory, Apoptotic, and Fibrinolytic
Because the vasoinhibin of 48 residues (Vi1-48) conserves the apoptotic, inflammatory, and fibrinolytic properties of the larger vasoinhibin isoform (Vi1-123) ( 15 ), we scanned the sequence of the 48-residue isoform with synthetic oligopeptides ( Fig. 3A ) for their ability to stimulate the apoptosis and inflammation of HUVEC and the lysis of a fibrin clot. First, we confirmed that only the oligopeptide containing the HGR motif (35-48) inhibited the proliferation and invasion of HUVEC, whereas the oligopeptides lacking the HGR motif were not antiangiogenic ( Fig. 3B and 3C ).
Only the oligopeptides 20-35 and 30-45 promoted the apoptosis of HUVEC ( Fig. 3D ) and the leukocyte adhesion to HUVEC monolayers ( Fig. 3E and 3F ), like Vi1-123 and Vi1-48. The estimated potency (EC 50 ) of these oligopeptides was 800 pM, with a significantly higher effectiveness for the 30-45 oligopeptide ( Fig. 3F ). Likewise, the 20-35 and 30-45 oligopeptides, but not oligopeptides 1-15, 12-25, or 35-48, exhibited fibrinolytic properties ( Fig. 3G and 3H ) and bound PAI-1 like Vi1-123 and Vi1-48 ( Fig. 3I ). The sequence shared between the 20-35 and 30-45 oligopeptides corresponds to His30-Asn31-Leu32-Ser33-Ser34-Glu35 (HNLSSE) ( Fig. 3A ). However, the significantly higher effect of 30-45 over 20-35 on apoptosis, inflammation, fibrinolysis, and PAI-1 binding suggests that Met36 could be part of the apoptotic, inflammatory, and fibrinolytic linear determinant of vasoinhibin (HNLSSEM).
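An EC50 of this kind is typically estimated by fitting a sigmoidal dose-response curve. The sketch below uses synthetic data and a hypothetical `fit_ec50` grid search; the authors' fitting procedure is not stated. It recovers an EC50 of 800 pM from an idealized Hill curve:

```python
# Illustrative EC50 estimation by least-squares grid search over a
# Hill (logistic) dose-response model (synthetic data, not the
# study's measurements).

def hill(conc, bottom, top, ec50, n=1.0):
    """Hill dose-response: fractional effect at concentration conc."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

# Synthetic dose-response with a true EC50 of 800 pM
concs_pM = [50, 100, 200, 400, 800, 1600, 3200, 6400]
resp = [hill(c, 0.0, 100.0, 800.0) for c in concs_pM]

def fit_ec50(concs, responses, bottom=0.0, top=100.0):
    """Return the EC50 (pM) on a coarse grid minimising squared error."""
    best, best_err = None, float("inf")
    for ec50 in range(100, 5001, 10):
        err = sum((hill(c, bottom, top, ec50) - r) ** 2
                  for c, r in zip(concs, responses))
        if err < best_err:
            best, best_err = ec50, err
    return best

fit_ec50(concs_pM, resp)  # 800
```

In practice a nonlinear least-squares routine with the Hill slope as a free parameter would replace the grid search, but the principle is the same.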
Oligopeptides Containing the HNLSSEM Vasoinhibin Sequence Stimulate the Nuclear Translocation of NF-κB and the Expression of Inflammatory Factors in HUVEC
Consistent with their apoptotic and inflammatory effects, the 20-35 and the 30-45 oligopeptides, like vasoinhibin (Vi1-123 and Vi1-48), induced the nuclear translocation of NF-κB ( Fig. 4A ) and upregulated the mRNA expression levels of the leukocyte adhesion molecules ( ICAM1 and VCAM1 ) and inflammatory cytokines ( IL1A , IL1B , IL6 , and TNF ) genes in HUVEC ( Fig. 4B ).
In Vivo Inflammation Is Stimulated by the HNLSSEM Sequence and Not by the HGR Motif
To evaluate whether the HGR or the HNLSSEM motif promotes the inflammatory phenotype of endothelial cells in vivo, the HGR-containing vasoinhibin analogue Vi45-51 or the HNLSSEM-containing 30-45 oligopeptide was injected intravenously to reach an estimated ≃10 μM concentration in serum; after 2 hours, mice were perfused, and lung, liver, kidney, and eyes were collected to evaluate the mRNA expression of leukocyte adhesion molecules ( Icam1 and Vcam1 ) and cytokines ( Il1b , Il6 , and Tnf ), and the level of the leukocyte marker ( Cd45 ). The underlying rationale is that intravenous delivery and short-term (2-hour) analysis in thoroughly perfused animals would reflect a direct effect of the treatments on the endothelial cell mRNA expression of inflammatory factors in the various tissues. The 30-45 peptide, but not Vi45-51, increased the expression levels of these inflammatory markers in the evaluated tissues ( Fig. 5A-5D ). Furthermore, because vasoinhibin is inflammatory in joint tissues ( 24 ), we injected 87 pmol of Vi45-51 or of the 30-45 peptide into the knee joint cavity of mice, and after 24 hours, only the 30-45 oligopeptide had induced the mRNA expression of Il1b , Il6 , and inducible nitric oxide synthase ( Inos ) ( Fig. 5E ). The finding in joints implies that, like that of vasoinhibin, the inflammatory effect of the 30-45 peptide extends to other vasoinhibin target cells, that is, synovial fibroblasts ( 24 ).
PAI-1, uPAR, and NF-κB Mediate the Apoptotic and Inflammatory Effects of the HNLSSEM Vasoinhibin Determinant
Vasoinhibin binds to a multimeric complex in endothelial cell membranes formed by PAI-1, uPA, and the uPA receptor (uPAR) (PAI-1-uPA-uPAR) ( 23 ), but it is unclear whether such binding influences the vasoinhibin-induced activation of NF-κB, the main signaling pathway mediating its apoptotic and inflammatory actions ( 11 , 12 , 25 ). Because the HNLSSEM determinant in vasoinhibin binds to PAI-1, activates NF-κB signaling, and stimulates the apoptosis and inflammation of HUVEC, we investigated their functional interconnection by testing whether inhibitors of PAI-1, uPAR, or NF-κB modified the apoptosis of, and leukocyte adhesion to, HUVEC treated with Vi1-123 and the oligopeptides 20-35 and 30-45 ( Fig. 6 ). Antibodies against uPAR and the inhibitor of NF-κB (BAY117085), but not the immunoneutralization of PAI-1, prevented the apoptotic effect of vasoinhibin and the HNLSSEM-containing oligopeptides ( Fig. 6A ). In contrast, all 3 inhibitors prevented the adhesion of leukocytes to HUVEC in response to Vi1-123, 20-35, and 30-45 ( Fig. 6B ). These results indicate that vasoinhibin, through the HNLSSEM motif, uses PAI-1, uPAR, and/or NF-κB to mediate endothelial cell apoptosis and inflammation ( Fig. 6C ).
Vasoinhibin represents a family of proteins comprising the first 48 to 159 amino acids of PRL, depending on the cleavage site of several proteases, including matrix metalloproteases ( 26 ), cathepsin D ( 27 ), bone morphogenetic protein 1 ( 28 ), thrombin ( 15 ), and plasmin ( 29 ). The cleavage of PRL occurs at the hypothalamus, the pituitary gland, and the target tissue levels, defining the PRL/vasoinhibin axis ( 30 ). This axis contributes to the physiological restriction of blood vessels in ocular ( 31 , 32 ) and joint ( 26 ) tissues and is disrupted in angiogenesis-related diseases, including diabetic retinopathy ( 33 ), retinopathy of prematurity ( 34 ), peripartum cardiomyopathy ( 35 ), preeclampsia ( 36 ), and inflammatory arthritis ( 37 ). Furthermore, 2 clinical trials have addressed vasoinhibin levels as targets of therapeutic interventions ( 38 ). However, the clinical translation of vasoinhibin is limited by difficulties in its production ( 39 ). These difficulties were recently overcome by the development of HGR-containing vasoinhibin analogues that are easy to produce, as well as potent, stable, and even orally active to inhibit the growth and permeability of blood vessels in experimental vasoproliferative retinopathies and cancer ( 14 ). Nonetheless, the therapeutic value of HGR analogues is challenged by evidence showing that vasoinhibin is also apoptotic, inflammatory, and fibrinolytic, properties that may worsen microvascular diseases ( 40 , 41 ). Here, we show that the various functions of vasoinhibin are segregated into 2 distinct, nonadjacent, and independent small linear motifs: the HGR motif responsible for the vasoinhibin inhibition of angiogenesis and vasopermeability ( 14 ) and the HNLSSEM motif responsible for the apoptotic, inflammatory, and fibrinolytic properties of vasoinhibin ( Fig. 7A ).
The HGR and HNLSSEM motifs are inactive in PRL, the vasoinhibin precursor. We confirmed that PRL has no antiangiogenic properties ( 10 ) and showed that PRL lacks apoptotic and inflammatory actions on endothelial cells as well as no fibrinolytic activity. PRL has 199 amino acids structured into a 4-⍺-helix bundle topology connected by 3 loops ( 43 ). The HGR motif is in the first part of loop 1 (L1) connecting ⍺-helixes 1 and 2, whereas the HNLSSEM motif is in ⍺-helix 1 (H1) ( Fig. 7A ). Upon proteolytic cleavage, PRL loses its fourth ⍺-helix (H4), which drives a conformational change and the exposure of the HGR motif, obscured by H4 ( 14 , 42 ). Since H1 and H4 are in close contact in PRL ( 43 ), it is likely that some elements of H4 also mask the HNLSSEM motif. Alternatively, it is also possible that residues of the HNLSSEM motif buried in the hydrophobic core of PRL become solvent exposed by the conformational change into vasoinhibin. However, this is unlikely since the hydrophobic core appears conserved during vasoinhibin generation ( 42 ).
A previous report indicated that binding to PAI-1 mediates the antiangiogenic actions of vasoinhibin ( 23 ). Contrary to this claim, antiangiogenic HGR-containing vasoinhibin analogues did not bind PAI-1, whereas the HNLSSEM-oligopeptides bound PAI-1 but did not inhibit HUVEC proliferation and invasion. While these findings unveil the structural determinants in vasoinhibin responsible for PAI-1 binding, they question the role of PAI-1 as a necessary element for the antiangiogenic effects of vasoinhibin. Little is known of the molecular mechanism by which vasoinhibin binding to the PAI-1-uPA-uPAR complex inhibits endothelial cells ( 23 ). Although the binding could help localize vasoinhibin on the surface of endothelial cells, the contribution of other vasoinhibin-binding proteins and/or interacting molecules cannot be excluded. For example, integrin ⍺5β1 interacts with the uPA-uPAR complex ( 44 ), and vasoinhibin binds to ⍺5β1 to promote endothelial cell apoptosis ( 45 ). Nevertheless, none of the HGR-containing analogues induced apoptosis. Therefore, the binding molecule/receptor that transduces the antiangiogenic properties of vasoinhibin remains unclear.
Vasoinhibin is commonly described as antiangiogenic due to its ability to inhibit endothelial cell proliferation, migration, and survival. However, the proapoptotic effect of vasoinhibin can occur independently of its antiangiogenic action. For example, vasoinhibin contributes to the physiological regression by apoptosis of the hyaloid vasculature, a transient network of intraocular vessels that nourishes the immature lens, retina, and vitreous ( 46 ). Moreover, despite lacking proapoptotic properties, Vi45-51 inhibits the growth of melanoma tumors ( 14 ) like whole vasoinhibin does ( 23 , 47-49 ), suggesting that the apoptotic effect of vasoinhibin may be irrelevant to its overall antiangiogenic efficacy.
Consistent with previous reports ( 11 , 12 , 23 ), vasoinhibin binding to PAI-1 and activation of uPAR and NF-κB did associate with the apoptotic, inflammatory, and fibrinolytic properties of the HNLSSEM-containing oligopeptides. These oligopeptides, but not HGR-containing oligopeptides, induced endothelial cell apoptosis, nuclear translocation of NFκB, expression of leukocyte adhesion molecules and proinflammatory cytokines, and adhesion of leukocytes, as well as the lysis of plasma fibrin clot. The inflammatory action, but not the apoptotic effect, was prevented by PAI-1 immunoneutralization, whereas both inflammatory and apoptotic actions were blocked by anti-uPAR antibodies or by an inhibitor of NF-κB. Locating the apoptotic, inflammatory, and fibrinolytic activity in the same short linear motif of vasoinhibin is not unexpected since the 3 events can be functionally linked. The degradation of a blood clot is an important aspect of inflammatory responses, and major components of the fibrinolytic system are regulated by inflammatory mediators ( 50 ). Examples of such interactions are the thrombin-induced generation of vasoinhibin during plasma coagulation to promote fibrinolysis ( 15 ), the endotoxin-induced IL-1 production inhibited by PAI-1 ( 51 ), and the TNFα-induced suppression of fibrinolytic activity due to the activation of NFκB-mediated PAI-1 expression ( 52 ). Furthermore, uPA is upregulated by thrombin and inflammatory mediators in endothelial cells ( 53 ), and uPAR is elevated under inflammatory conditions ( 54 ). On the other hand, the identification of HNLSSEM as the motif responsible for the binding of vasoinhibin to PAI-1 raises the possibility that such motif contributes to the interaction of other proteins with PAI-1, like vitronectin, uPA, and tPA ( 55 ). Our preliminary analyses found no identical HNLSSEM sequences in these proteins but detected similar motifs in regions shown to be irrelevant for binding to PAI-1 that merit further research.
The inflammatory action of the HNLSSEM oligopeptides is further supported by their in vivo administration. The intravenous injection of HNLSSEM oligopeptides upregulated the short-term (2 hours postinjection) expression of Icam1 , Vcam1 , Il1b , Il6 , and Tnf and the infiltration of leukocytes (evaluated by the expression levels of the leukocyte marker Cd45 ) in different tissues, indicative of an inflammatory action on different vascular beds. Also, the HNLSSEM oligopeptides injected into the intra-articular space of joints elicited a longer-term inflammation (24 hours postinjection), indicative of an inflammatory response in joint tissues. This action is consistent with the vasoinhibin-induced stimulation of the inflammatory response of synovial fibroblasts, primary effectors of inflammation in arthritis ( 37 ).
The challenge is to understand when and how vasoinhibin impacts angiogenesis, apoptosis, inflammation, and fibrinolysis pathways in health and disease. One likely example is during the physiological repair of tissues after wounding and inflammation. By inhibiting angiogenesis, vasoinhibin could help counteract the proangiogenic action of growth factors and cytokines, whereas by stimulating apoptosis, inflammation, and fibrinolysis, vasoinhibin could promote the pruning of blood vessels, protective inflammatory reactions, and the clot dissolution needed for tissue remodeling. Indeed, the antiangiogenic effect of vasoinhibin can be accompanied by its inflammatory actions, although the HNLSSEM motif is 5.3 times less potent than the HGR motif ( Fig. 7B ). However, in the absence of successful containment, overproduction of blood vessels, persistent inflammation, and dysfunctional coagulation determine the progression and therapeutic outcomes in cancer ( 41 , 56 ), diabetic retinopathy ( 57 ), and rheumatoid arthritis ( 58 ). The complexity of vasoinhibin actions under disease is exemplified in murine antigen-induced arthritis, where vasoinhibin ameliorates pannus formation and growth via an antiangiogenic mechanism but promotes joint inflammation by stimulating the inflammatory response of synovial fibroblasts ( 37 , 59 ).
Antiangiogenic drugs, in particular VEGF inhibitors, have reached broad usage in the fields of cancer and retinopathy, albeit with partial success and safety concerns ( 6 , 60 , 61 ). They yield only modest gains in efficacy and survival time, are prone to resistance, and cause mild to severe side effects that include infections, bleeding, wound healing complications, and thrombotic events. These toxicities illustrate the association between the inhibition of blood vessel growth and the multifactorial pathways influencing endothelial cell apoptosis, inflammation, and coagulation ( 6 ). The fact that the HGR analogues lack the apoptotic, inflammatory, and fibrinolytic properties of vasoinhibin highlights their future as potent and safe inhibitors of blood vessel growth, avoiding drug resistance through their broad action against different proangiogenic substances.
In summary, this work segregates the activities of vasoinhibin into 2 linear determinants and provides clear evidence that the HNLSSEM motif is responsible for binding to PAI-1 and exerting apoptotic, inflammatory, and fibrinolytic actions via PAI-1, uPAR, and NF-κB pathways, while the HGR motif is responsible for the antiangiogenic effects of vasoinhibin. This knowledge provides tools for dissecting the differential effects and signaling mechanisms of vasoinhibin under health and disease and for improving its development into more specific, potent, and less toxic antiangiogenic, proinflammatory, and fibrinolytic drugs.

Juan Pablo Robles and Magdalena Zamora contributed equally.
Abstract
Vasoinhibin, a proteolytic fragment of the hormone prolactin, inhibits blood vessel growth (angiogenesis) and permeability, stimulates the apoptosis and inflammation of endothelial cells, and promotes fibrinolysis. The antiangiogenic and antivasopermeability properties of vasoinhibin were recently traced to the HGR motif located in residues 46 to 48 (H46-G47-R48), allowing the development of potent, orally active, HGR-containing vasoinhibin analogues for therapeutic use against angiogenesis-dependent diseases. However, whether the HGR motif is also responsible for the apoptotic, inflammatory, and fibrinolytic properties of vasoinhibin has not been addressed. Here, we report that HGR-containing analogues are devoid of these properties. Instead, the incubation of human umbilical vein endothelial cells with oligopeptides containing the sequence HNLSSEM, corresponding to residues 30 to 36 of vasoinhibin, induced apoptosis, nuclear translocation of NF-κB, expression of genes encoding leukocyte adhesion molecules ( VCAM1 and ICAM1 ) and proinflammatory cytokines ( IL1B, IL6, and TNF ), and adhesion of peripheral blood leukocytes. Also, intravenous or intra-articular injection of HNLSSEM-containing oligopeptides induced the expression of Vcam1, Icam1, Il1b, Il6, and Tnf in the lung, liver, kidney, eye, and joints of mice and, like vasoinhibin, these oligopeptides promoted the lysis of plasma fibrin clots by binding to plasminogen activator inhibitor-1 (PAI-1). Moreover, the inhibition of PAI-1, urokinase plasminogen activator receptor, or NF-κB prevented the apoptotic and inflammatory actions. In conclusion, the functional properties of vasoinhibin are segregated into 2 different structural determinants. Because apoptotic, inflammatory, and fibrinolytic actions may be undesirable for antiangiogenic therapy, HGR-containing vasoinhibin analogues stand as selective and safe agents for targeting pathological angiogenesis. 
Introduction

The formation of new blood vessels (angiogenesis) underlies the growth and repair of tissues and, when exacerbated, contributes to multiple diseases, including cancer, vasoproliferative retinopathies, and rheumatoid arthritis ( 1 ). Antiangiogenic therapies based on tyrosine kinase inhibitors ( 2 , 3 ) and monoclonal antibodies against vascular endothelial growth factor (VEGF) or its receptor ( 4 ) have proven beneficial for the treatment of cancer and retinal vasoproliferative diseases ( 5 ). However, disadvantages such as toxicity ( 6-8 ) and resistance ( 9 ) have incentivized the development of new treatments.
Vasoinhibin is a proteolytically generated fragment of the hormone prolactin that inhibits endothelial cell proliferation, migration, permeability, and survival ( 10 ). It binds to a multi-component complex formed by plasminogen activator inhibitor-1 (PAI-1), urokinase plasminogen activator (uPA), and the uPA receptor on endothelial cell membranes, which can contribute to the inhibition of multiple signaling pathways (Ras-Raf-MAPK, Ras-Tiam1-Rac1-Pak1, PI3K-Akt, and PLCγ-IP 3 -eNOS) activated by several proangiogenic and vasopermeability factors (VEGF, basic fibroblast growth factor [bFGF], bradykinin, and interleukin [IL]-1β) ( 10 ). Moreover, vasoinhibin, by itself, activates the NF-κB pathway in endothelial cells to stimulate apoptosis ( 11 ) and trigger the expression of inflammatory factors and adhesion molecules, resulting in leukocyte infiltration ( 12 ). Finally, vasoinhibin promotes the lysis of a fibrin clot by binding to PAI-1 and inhibiting its antifibrinolytic activity ( 13 ).
The antiangiogenic determinant of vasoinhibin was recently traced to a short linear motif of just 3 amino acids (His46-Gly47-Arg48) (the HGR motif), which led to the development of heptapeptides comprising residues 45 to 51 of vasoinhibin that inhibited angiogenesis and vasopermeability with the same potency as whole vasoinhibin ( 14 ) ( Fig. 1A ). The linear vasoinhibin analogue (Vi45-51) was then optimized into a fully potent, proteolysis-resistant, orally active cyclic retro-inverse heptapeptide (CRIVi45-51) ( Fig. 1A ) for the treatment of angiogenesis-dependent diseases ( 14 ). Notably, thrombin generates a vasoinhibin of 48 amino acids (Vi1-48) that contains the HGR motif ( Fig. 1A ). Vi1-48 is antiangiogenic and fibrinolytic ( 15 ), suggesting that the HGR motif could also be responsible for the apoptotic, inflammatory, and fibrinolytic properties of vasoinhibin. This possibility needed to be analyzed to support the therapeutic future of the HGR-containing vasoinhibin analogues as selective and safe inhibitors of blood vessel growth and permeability. Moreover, the identification of specific functional domains within the vasoinhibin molecule provides insights and tools for understanding its overlapping roles in angiogenesis, inflammation, and coagulation under health and disease.

Acknowledgments
We thank Xarubet Ruíz Herrera, Fernando López Barrera, Adriana González Gallardo, Alejandra Castilla León, José Martín García Servín, and María A. Carbajo Mata for their excellent technical assistance.
Funding
The work was supported by grant A1-S-9620B from Consejo Nacional de Humanidades, Ciencias y Tecnologias (CONAHCYT) and grant SECTEI/061/2023 from Secretaría de Educación, Ciencia, Tecnología e Innovación de la Ciudad de México (SECTEI) to C.C. and CF-2023-I-113 grant from CONAHCYT to G.M.E.
Disclosures
The authors declare the following broadly competing interests: J.P.R., M.Z., T.B., G.M.E., J.T., and C.C. are inventors of a submitted patent application (WO/2021/098996). The Universidad Nacional Autónoma de México (UNAM) and the authors J.T. and T.B. are owners of the pending patent. J.P.R. is the CEO and Founder of VIAN Therapeutics, Inc.
Data Availability
Original data generated and analyzed during this study are included in this published article or in the data repositories listed in References.
Abbreviations
bFGF: basic fibroblast growth factor
BSA: bovine serum albumin
CRIVi: cyclic retro-inverse-vasoinhibin analogue
ECGS: endothelial cell growth supplement
ELISA: enzyme-linked immunosorbent assay
FBS: fetal bovine serum
HUVEC: human umbilical vein endothelial cells
ICAM-1: intercellular adhesion molecule 1
IL: interleukin
PAI-1: plasminogen activator inhibitor-1
PBS: phosphate-buffered saline
PBS-T: phosphate-buffered saline with 0.1% Tween-20
PCR: polymerase chain reaction
PRL: prolactin
TNF: tumor necrosis factor
tPA: tissue plasminogen activator
uPA: urokinase plasminogen activator
uPAR: urokinase plasminogen activator receptor
VCAM-1: vascular cell adhesion molecule 1
VEGF: vascular endothelial growth factor
Vi: 123-residue vasoinhibin (1-123)
Vi1-48: 48-residue vasoinhibin (1-48)
Vi45-51: linear vasoinhibin analogue (45-51)

License: CC BY | Citation: Endocrinology. 2023 Dec 6; 165(2):bqad185
PMC10754748 | 38006481

Introduction
Green Chemistry is mainly about how to protect the environment and ourselves from the adverse effects of our own chemicals and how to convert harmful materials into more benign ones. Slurry was once thought to be an inherently safe and benign source of nutrients. Due to the intensifying technologies of modern agriculture (such as the increasing use of oestrus -inducing hormonal products and other pharmaceuticals), however, slurry ends up in the environment as a carrier of endocrine-disrupting chemicals (EDCs), causing growing concerns (Li et al. 2020 ). These micropollutants (MPs) can be harmful to living creatures even at extremely low concentrations (in the 1 ng/kg range) (Grover et al. 2011 ; Fuhrman et al. 2015 ). Using manure or slurry on agricultural fields is not only a traditional method, but also a part of the so-called circular economy. Due to this practice, however, steroidal estrogens (SE) can easily penetrate the soil and the groundwater (Zitnick et al. 2011 ), and can be taken up by plants and accumulated in their different parts (Erdal and Dumlupinar 2011 ; Gworek et al. 2021 ). Once in the food chain, they can potentially cause adverse effects (e.g. fertility problems, feminising effects, cancer) on wildlife, cultivated livestock and human health alike (Grover et al. 2011 ; Fuhrman et al. 2015 ). The SE content can reach the hundreds, even up to the thousands of ng/kg magnitude (Zhang et al. 2015 ; Gudda et al. 2022 ). The estrogenic effect of 17ß-estradiol (17ß-E2) is 20,000 times higher than that of bisphenol-A (BPA), a widely known EDC of industrial origin (Coldham et al. 1997 ).
The main objective of this research was to study the environmental “price” of large-scale, continuous milk production from a rarely studied perspective, i.e. mapping the estrogenic footprint (the amount of oestrus -inducing hormonal products and that of the generated endoestrogens) in the slurry produced at a dairy cow farm. To our knowledge, the detectability and the decomposition of oestrus -inducing hormonal products in slurry over four consecutive years have not yet been studied. The present study investigates the fate of 5 oestrus -inducing veterinary products (OIVPs) used at a real dairy cow farm in Hungary. We examined the use of these OIVPs from 2017 to 2020 and tested their estrogenic activities as well.
For testing the estrogenic effects, we applied a dual approach: exact molecules were tested with an ultra-high-performance liquid chromatography method with fluorescence detection (UHPLC-FLD). In addition, to have a reliable effect-based method (EBM) for more holistic results, we developed and validated the Yeast Estrogen Screen (YES) test (adapted from the ISO 19040 standard), which was our main research method. It employs the genetically modified Saccharomyces cerevisiae BJ3505 strain, which contains the human estrogen receptor. The YES test is largely able to overcome the difficulties caused by complex biological matrices and the chemical diversity of the molecules to be tested (Jobling et al. 2009 ).
A compound can be considered estrogenic if it can bind to the estrogen receptors (ER) and is able to generate biological effects (Bittner et al. 2014 ). Numerous pollutants that can bind to those receptors occur in the environment (Arya et al. 2020 ). The simultaneous presence of antibiotics and estrogens poses a greater ecological risk than either pollutant alone because antibiotics may increase the persistence of estrogens (He et al. 2019 ).
The appearance of SEs in the environment has attracted scientific interest worldwide. Previous studies mainly focused on SEs appearing in the aquatic environment or in wastewater treatment facilities (Arlos et al. 2016 ; Cerná et al. 2022 ), coming from urban areas or from animal husbandry (Du et al. 2020 ; Lin et al. 2020 ; Zhong et al. 2021 ). According to He et al. ( 2019 ), as much as 90% of ambiently appearing estrogens come from animal husbandry. The estimated total animal-borne estrogen emission of the European Union and the USA is 83,000 kg/ annum , more than twice the human emission (Shrestha et al. 2012 ; Laurenson et al. 2014 ). Johnson et al. ( 2006 ) estimated that a dairy cow releases 384 mg 17ß-estradiol (17ß-E2) into the environment with urine and faeces on a daily basis, while a sow in farrow excretes 700–17,000 mg estrone (E1) daily.
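As a back-of-envelope illustration of the scale these figures imply, the per-cow daily excretion quoted above can be scaled to a herd; the herd size of 1600 active dairy cows matches the farm described below, and treating the per-cow figure as constant year-round is a simplifying assumption:

```python
# Back-of-envelope estimate of the annual 17ß-estradiol (17ß-E2) load of a herd,
# using the per-cow daily excretion from Johnson et al. (2006). Assumes a
# constant excretion rate and 1600 active dairy cows (the farm studied here).

E2_MG_PER_COW_PER_DAY = 384      # mg 17ß-E2 excreted daily (urine + faeces)
ACTIVE_DAIRY_COWS = 1600
DAYS_PER_YEAR = 365

def annual_e2_load_kg(cows: int = ACTIVE_DAIRY_COWS) -> float:
    """Annual 17ß-E2 excretion of the herd in kilograms."""
    mg_per_year = E2_MG_PER_COW_PER_DAY * DAYS_PER_YEAR * cows
    return mg_per_year / 1e6     # mg -> kg

print(f"{annual_e2_load_kg():.1f} kg 17ß-E2 per year")   # ~224 kg/year
```

Even under these idealised assumptions, a single farm of this size would excrete on the order of a few hundred kilograms of 17ß-E2 per year, which puts the 83,000 kg/annum continental estimate into perspective.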
Besides the estrogens produced by living organisms themselves (endoestrogens), there are several types of organic and inorganic molecules that are able to recognise the ligand-binding domains of the estrogen receptors (Farooq 2015 ). Endoestrogens, such as estrone (E1), estradiol (17ß- and 17α-E2) and estriol (E3), are synthesised from cholesterol by the genitals and other organs (Farooq 2015 ), while xenoestrogens are synthetic compounds with estrogenic effects, e.g. pharmaceuticals.
Description of the cattle farm
The farm studied is situated near Budapest, Hungary. The livestock consists of 2500 animals on average, out of which 1600 are active dairy cows, while the rest are dry cows, heifers and calves. There are 1500 calvings/ annum on average. Manure is collected as slurry in a large storage pool. The average amount of slurry is 70,000 m 3 / annum . It is spread onto agricultural fields depending on the availability of the sites.
Propagation protocol of the farm
The reproduction of cattle is regulated by many hormones. It is directly regulated by gonadotropin-releasing hormone (GnRH), secreted from the hypothalamus ; estrogens, secreted from the follicles ; progesterone, secreted from the corpus luteum ; and prostaglandin F2α (PGF2α), secreted from the endometrium in the inner part of the uterus . All the hormones listed have their own functions as well as influence on other hormones during the entire reproduction cycle of the cow (Sammad et al. 2019 ).
Farm managements often shift to targeted breeding programmes to simplify operations. One of these is based on the application of PGF2α, with which cows can be inseminated at any desired time, without the need for a subjective decision on the oestrus . For the synchronisation of the cycle and to influence the growth and the development of the follicle , hormonal (GnRH) products are used (Gábor et al. 2004 ; Ricci et al. 2020 ). The so-called OvSynch protocol was employed at the farm studied in 2017 and in the first half of 2018 as the base programme of the reproduction biology. In the second half of 2018, the double OvSynch protocol was introduced, during which the animal goes through a second OvSynch protocol with a 7-day delay and receives insemination only after that. In the latter case, everything is programmed in time; the protocol simply makes the cow ready for insemination, but it requires precision (Yániz et al. 2004 ; Dirandeh et al. 2015 ; Nowicki et al. 2017 ).
Description of the oestrus -inducing veterinary pharmaceuticals
Given that the lactation of dairy cows starts only after calving, successful insemination is inevitable for successful milk production throughout the year, distributed among the entire livestock. At large farms, it is programmed with intramuscularly applied hormonal injections. In our research, we examined the 5 different OIVPs (Table 1 ) that were actually used at the farm.
Taking and preparation of the slurry samples
Between 2017 and 2020, we regularly sampled the large slurry pool of 14,000 cubic metres on a quarterly basis. Every time, samples were taken using 4 disposable, sterile, polypropylene centrifuge vials of 50 ml each, which were free from DNase, RNase, endotoxins and metals, could be frozen to − 80 °C and were resistant to chemicals. Glassware involved in the test was washed thoroughly as usual in the laboratory practice, then rinsed with ethanol twice and dried at 120 °C for 2 h. We took the subsamples from each corner of the slurry pool, mixed them, then stored them at 4 °C in a refrigerator and processed them within 1–3 days after sampling. The sample-containing vials were centrifuged for 20 min at 4 °C and at 4200 rpm (Heraus Megafuge 40 R centrifuge), during which the liquid and the solid phases of the slurry were separated. After that, materials with estrogenic effect were extracted from the supernatant using solid phase extraction (SPE), and from the solid phase after a separation method.
SPE is one of the best options and therefore often used to extract and concentrate analytes of interest from complex biological matrices. For our research, an OASIS HLB 6 cc 200 mg 30 μm cartridge was used. First, we conditioned the column with 8 ml pure methanol and with 8 ml of a water:methanol 95:5 mixture. Second, we loaded the sample by adding 30 ml supernatant to the cartridge. Third, we washed the impurities from the column using 10 ml of a water:methanol 1:1 mixture and a water:acetone 2:1 mixture. After drying it for 1 min, we eluted the analytes with 5 ml pure methanol. The eluent contained the materials with estrogenic effect and was ready for the yeast test. We measured 10 μl from it into each well of the 96-well plate for the YES tests, and 3 ml aliquots were preserved for the UHPLC analyses. The latter were frozen immediately to − 20 °C and kept at that temperature in a refrigerator.
From the solid phase of the slurry, we measured 2 g into beakers of 50 ml and added 10 ml pure methanol to each of them. After an ultrasonic treatment for 30 min at 30 °C (JEKEN PS 40A 10L) and a centrifuge stage at 2000 rpm, at 4 °C for 10 min, the supernatant was ready for the yeast test; therefore, we measured 10 μl into each well of the 96-well plate. Three millilitre aliquots were preserved for the UHPLC analyses. The latter ones were frozen to − 20 °C and were kept at that temperature in refrigerator.
Preparation of the OIVPs for the testing of estrogenic effect
We attempted to test the estrogenic effect of the pure OIVPs which were used at the dairy cow farm by performing the in vitro yeast assay on them. For this reason and to avoid contamination, an intact bottle from each of the 5 OIVPs was transported into the laboratory. From all 5 types, the following series of volumes were measured undiluted into the wells of the 96-well plate: 20 μl, 10 μl, 5 μl, 1 μl, 0.5 μl, 0.1 μl. Number of repetitions: 4.
Development, validation and implementation of the yeast test
The yeast test was developed from the ISO 19040–1:2018 standard designed for measuring the estrogenic potentials of water, wastewater and sediment samples using the Saccharomyces cerevisiae BJ3505 genetically modified yeast strain. The method was adapted to test our medicine and slurry samples. The Yeast Estrogen Screen (YES) test is a reporter gene analysis which serves to measure the activation of the human estrogen receptor-alpha (hERα) in the presence of compounds with estrogenic effect. If the yeast meets estrogenic molecules or homologous ones, it starts to produce the ß-D-galactosidase enzyme. The amount of the enzyme can be quantified by adding a yellow substrate, chlorophenol red-ß-D-galactopyranoside (CPRG), and measuring the resulting product of red colour at 580 nm with a spectrophotometer (Labsystems Multiskan MS) (Hong 2012 ).
On day 1, we started to breed the yeast in a breeding solution. With permanent stirring, it was kept at 30 ± 1 °C for 22 ± 1 h (incubator type: PLO-EKO Aparatura).
On day 2, we prepared the above-mentioned medicine and slurry (supernatant and sediment) samples and measured them into the 96-well plate, in four repetitions in each case. After drying them out, 80 μl of 0.3% ethanol solution and 40 μl of yeast suspension were measured into the wells (the row of blanks served as negative control, receiving the same treatment but without the test organism, the yeast). The row of dilutions was kept in the above-mentioned incubator under the same conditions.
On day 3, samples were resuspended with pipettes and cell density was measured at 620 nm with a spectrophotometer to check whether and how homogeneously the yeast grew. Then, 30 μl was measured from each sample into a new plate, and 50 μl Lac-Z reagent containing the CPRG substrate was added to each well. After 1 h of incubation, we measured the colour changes at 580 nm wavelength (Purvis et al. 1991 ; Routledge and Sumpter 1996 ).
From the cell density measured at 620 nm and the colour changes measured at 580 nm, using Microsoft Excel and MyAssays Desktop software, we calculated the relative growth of the yeast, the average corrected absorbance, the induction quotient, the limit of quantitation (LOQ), the limit of detection (LOD) and the lowest ineffective dilution (LID). A dose–response curve was established for a reference compound, 17β-estradiol, and this curve served as a benchmark for estrogenic activity. Statistical techniques were used to fit a curve to the experimental data points, and the sigmoidal (S-shaped) curve, modeled using the four-parameter logistic function, was used for this purpose (Findlay and Dillard 2007 ; Hong 2012 ). Once the curve was fitted, it was used to estimate the estrogenic activity of the sample at specific concentrations. This interpolation allowed us to determine whether the sample exhibited estrogenic activity and whether a compound or sample surpassed a predefined estrogenic activity threshold. If the sample’s activity crossed the threshold (reached the linear stage of the dose–response curve), it was considered to possess estrogenic properties. In cases where a sample exhibited high estrogenicity, it was diluted to capture the linear stage of the dose–response curve, ensuring more accurate measurement. The resulting EEQ concentration shows that the estrogenic activity of the sample is equivalent to the estrogenic activity of a 17ß-E2 solution of the same concentration.
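The curve fitting and interpolation described above can be sketched as follows. The study used MyAssays Desktop rather than this code, and the standard responses below are illustrative values, not measured data; only the standard concentrations are taken from the validation section:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the EEQ calculation: fit a four-parameter logistic (4PL) curve to
# 17ß-E2 standards, then invert it to read a sample's EEQ from its absorbance.

def four_pl(x, a, d, c, b):
    """a = lower asymptote, d = upper asymptote, c = EC50, b = Hill slope."""
    return a + (d - a) / (1.0 + (c / x) ** b)

conc = np.array([2.0, 6.2, 18.6, 55.6, 166.5, 500.0])  # ng/l 17ß-E2 standards
resp = np.array([0.05, 0.12, 0.35, 0.70, 0.95, 1.05])  # corrected absorbance (illustrative)

(a, d, c, b), _ = curve_fit(four_pl, conc, resp, p0=[0.05, 1.1, 50.0, 1.0], maxfev=10000)

def eeq_from_response(y: float) -> float:
    """Invert the fitted 4PL to get the 17ß-E2-equivalent concentration (ng/l)."""
    return c / (((d - a) / (y - a) - 1.0) ** (1.0 / b))

sample_resp = 0.5   # must lie on the linear stage of the curve, as noted above
print(f"EEQ = {eeq_from_response(sample_resp):.1f} ng/l")
```

Samples whose responses fall above the linear stage would be diluted and re-read, exactly as described in the text, because the inversion is unreliable near the asymptotes.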
Description of the UHPLC method
For the UHPLC analyses, frozen samples preserved from the YES tests were transported to a professional analytical laboratory belonging to the Hungarian Academy of Sciences. The thawed samples were filtered through a glass fiber membrane filter (Chromafil GF/PET-45/25, 0.45 μm) to remove suspended particles. 0.5–1.0 ml aliquots were inserted into the autosampler.
Concentrations of pharmaceuticals in the liquid phase were analysed via UHPLC (Shimadzu-Nexera X2 LC-30AD) using fluorescence (FLD) and PDA detectors. Sensitive analytical methods were developed for the simultaneous determination of pharmaceuticals and estradiols in the prepared samples. The excitation and emission wavelengths were 280 nm and 310 nm, respectively. For separation, a reverse phase column was used (Kinetex C18; particle size: 2.6 μm; length: 150 mm). The mobile phase was a 57:43% mixture of ultrapure water (acidified with 10 mM H 3 PO 4 ) and acetonitrile. The flow rate was between 0.6 and 0.8 ml/min at 40 °C. The injected sample volume was 1 μl. For the limit of detection (LOD) and the limit of quantitation (LOQ), see Table 2 . The chemicals (acetonitrile and methanol) and the standards of analytical purity (E1, 17α-E2, 17ß-E2, EE2, E3, gonadorelin, cloprostenol, dinoprost-trometamin, chlorocresol, benzyl-alcohol) were purchased from Sigma-Aldrich. Ultrapure water with a quality of 0.055 μS/cm (LaboStar® PRO TWF) was used in all the analytical procedures. The stock solutions were diluted in methanol and prepared in amber-stained borosilicate beakers. Calibration was carried out by the use of pharmaceuticals as external standards. The following concentrations were used in triplicate for each pharmaceutical: 1, 10, 50, 100, 500, 700 and 1000 μg/l.
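The external-standard calibration amounts to a least-squares line through (concentration, peak area) points, inverted for quantitation; a minimal sketch, in which the calibration levels are taken from the text but the detector areas are made-up illustrative numbers:

```python
# Sketch of external-standard calibration for UHPLC-FLD quantitation:
# fit a least-squares line to (concentration, peak area) standards, then
# invert it to convert a sample's peak area into a concentration.

def linfit(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Calibration levels from the text (µg/l); peak areas are illustrative values.
levels = [1, 10, 50, 100, 500, 700, 1000]
areas  = [12, 118, 598, 1195, 5990, 8380, 11980]

slope, intercept = linfit(levels, areas)

def quantify(area: float) -> float:
    """Peak area -> concentration (µg/l) via the inverted calibration line."""
    return (area - intercept) / slope

print(f"{quantify(2400):.0f} µg/l")
```

In practice each pharmaceutical gets its own calibration line (triplicate injections per level), and only areas within the calibrated range are inverted.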
Most endocrine disruptors have aromatic moieties that allow them to be detected by fluorescence, especially in the case of estradiols (Fig. S1 ). Therefore, the methods were carefully designed to achieve a suitable resolution with the fluorescence detector for each compound (Fig. S1 , Fig. S2 ). Fluorescence detection–coupled HPLC is an ideal tool for the routine measurement of estrogens due to its sensitivity, selectivity, and cost-effectiveness.
Methods of statistical analyses of the results
We attempted to find correlations between two response variables (the estrogenic effect of the liquid and the solid phases of the slurry, averaged quarterly in μg/kg) and three groups of background variables summed quarterly: (1) amount of injected active ingredients (mg), per active ingredient (D-Phe6-gonadorelin, cloprostenol, dinoprost-trometamin) and summed; (2) number of treatments, per medicine (Ovarelin, PGF, Gonavet, Dinolytic, Alfaglandin) and summed; (3) auxiliary background variables (number of inseminations, number of calvings, number of dead calvings, number of abortions).
During the analysis of the row of data ( n = 16), first, we quantified the correlation relationship among all the variables with the Pearson correlation coefficient. Then, we performed principal component analysis (PCA). During the PCA, we used the two response variables describing the estrogenic effect to stretch out the ordination space, and we fitted the background variables describing the active ingredients in that space with a permutation method using 1000 repetitions. Analyses were performed with “R” statistical software (R CORE TEAM 2020 ) and its “vegan” package (Oksanen et al. 2020 ). | Results
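The analyses above were run in R with the "vegan" package; as a language-agnostic sketch, the Pearson step combined with a 1000-shuffle permutation test (the same resampling idea used for fitting the background variables in the PCA) might look like the following, on placeholder data rather than the study's series:

```python
import random

# Pearson correlation with a permutation p-value, mirroring the approach
# described in the text (R/"vegan" with 1000 permutations). Data are
# placeholders, not the study's quarterly series.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def perm_pvalue(x, y, n_perm=1000, seed=1):
    """Two-sided permutation p-value for |r| under random shuffles of y."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    y = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(pearson(x, y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction

# Placeholder quarterly series (n = 16, matching the study design):
rng0 = random.Random(0)
x = [i + rng0.random() for i in range(16)]
y = [2 * i + 1 for i in range(16)]

r = pearson(x, y)
p = perm_pvalue(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
```

With n = 16 quarters, the permutation approach is attractive precisely because parametric p-values for Pearson's r rest on normality assumptions that are hard to verify on so few points.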
Validation of the UHPLC method
Linearity, precision (repeatability), accuracy (recovery) and the limits of detection (LOD) and quantification (LOQ) were evaluated to verify the performance of the method. Validation was performed according to the International Conference on Harmonisation (ICH) guideline Q2 (R1). The linearity was determined from 10 analytical curves, in triplicate, of the standards in the range of 0.001–2 μg/ml. The results were evaluated using the regression coefficients (Table S1 , Table S2 ). To assess the precision and accuracy of the method, three concentrations (0.01–1 μg/ml) were measured at three different times of the same day and on three consecutive days. The results were expressed as relative standard deviation (RSD). The developed chromatographic method should provide separation of the pharmaceuticals in the liquid phase of slurry; peak interference was not observed during the analysis. Samples were spiked with the appropriate chemicals, especially when dissolved organic materials caused noise on the chromatogram. The analysis was performed with the same equipment by a single analyst. The limits of detection (LOD) and quantification (LOQ) were determined based on the calibration curves, using the standard deviation of the y-intercepts of the regression lines as the standard deviation.
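A minimal sketch of the ICH Q2(R1) calibration-based LOD/LOQ calculation used here (LOD = 3.3·σ/S, LOQ = 10·σ/S, with σ the standard deviation of the y-intercepts of replicate calibration lines and S the slope); the replicate parameters below are hypothetical:

```python
import statistics

# ICH Q2(R1) calibration-curve approach: LOD = 3.3*sigma/S, LOQ = 10*sigma/S,
# where sigma = SD of the y-intercepts of replicate calibration lines and
# S = the (mean) calibration slope. Input values are hypothetical.

def lod_loq(intercepts, slopes):
    sigma = statistics.stdev(intercepts)   # SD of y-intercepts
    s = statistics.mean(slopes)            # mean calibration slope
    return 3.3 * sigma / s, 10.0 * sigma / s

# Hypothetical replicate calibration parameters (area units; area per µg/l):
intercepts = [1.8, 2.4, 2.1]
slopes = [11.9, 12.1, 12.0]

lod, loq = lod_loq(intercepts, slopes)
print(f"LOD = {lod:.4f} µg/l, LOQ = {loq:.4f} µg/l")
```

By construction LOQ is about three times LOD, which is why a compound can be detectable (above LOD) but not reliably quantifiable (below LOQ).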
UHPLC tests
Using the UHPLC-FLD method, we tested 32 slurry samples for 5 steroidal estrogen compounds, 3 active substances of oestrus -inducing pharmaceuticals and 2 auxiliary ingredients. At least one compound belonging to the estrogenic group could be detected in every sample. The active substances of the OIVPs (D-Phe6-gonadorelin, cloprostenol and dinoprost-trometamin) and the two auxiliary ingredients studied (chlorocresol, benzyl-alcohol) were all under the detection limit in all samples. E1 and EE2 were also under the LOD. By contrast, E3 was detected in 100% of the samples, 17α-E2 in 78% and 17β-E2 in 66% of them. 17β-E2 and 17α-E2 appeared regularly at above-LOD levels in the samples from the second half of 2018 onwards. Values were consistently higher in the solid fraction of the slurry: in the case of 17β-E2, concentrations were six to seven times higher than in the liquid fraction (Table 3 ).
Validation of the yeast assay (YES test)
For the yeast test, the LOQ was 1 ng/l 17ß-E2, while the LOD was 27 pg/l. To assess the accuracy and the precision of the tests, we tested 17ß-E2 standards in concentrations ranging from 2 to 500 ng/l on 96-well plates. The precision was above 84%, which meets the Industry Guideline for the Validation of Bioanalytical Methods. Accuracy, on the other hand, did not exceed the level of 80% for the two lowest concentrations; therefore, only concentrations between 18.6 and 500 ng/l meet the requirements (for 2 ng/l: 48.8%; 6.2 ng/l: 76%; 18.6 ng/l: 91.3%; 55.6 ng/l: 91%; 166.5 ng/l: 97.9%; 500 ng/l: 98.3%, at 95% confidence).
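As a trivial check, applying the 80% acceptance rule to the per-concentration accuracies reported above reproduces the validated working range (all numbers are taken from the text):

```python
# Apply the >= 80% accuracy acceptance rule to the reported per-concentration
# accuracies of the YES test to recover its validated working range.
accuracy = {2.0: 48.8, 6.2: 76.0, 18.6: 91.3, 55.6: 91.0, 166.5: 97.9, 500.0: 98.3}

valid = [c for c, acc in sorted(accuracy.items()) if acc >= 80.0]
print(f"validated range: {valid[0]}-{valid[-1]} ng/l")   # 18.6-500.0 ng/l
```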
Slurry estrogenic effect test (YES test)
We revealed that the EEQ values of the slurry had a steadily rising tendency over time, with some seasonal fluctuations (Table 3 ). The compounds with estrogenic effects tend to bind more strongly to the solid phase than to the liquid one. The shift in the protocol at the farm in the 2nd quarter of 2018 doubled the number of hormonal injections administered. As a consequence, EEQ values of the liquid phase below 100 μg/l can only be found before that date (e.g. 4th quarter of 2017); afterwards, values 2–3 times higher were observed. The solid phase, on the other hand, showed significantly higher EEQ values, greater by orders of magnitude than those of unpolluted samples. Values equal to or below 2000 μg/l can only be detected before the date of the protocol change; afterwards, with the exception of one data point, we measured EEQ values up to 2–3 times higher. Values measured in the 3rd and 4th quarters of the years of the research are typically higher than those in the 1st and 2nd quarters.
Medicine active ingredient test (YES test)
We studied 5 oestrus -inducing hormonal products (Ovarelin, PGF, Gonavet, Dinolytic, Alfaglandin) by measuring 6 different volumes (20 μl, 10 μl, 5 μl, 1 μl, 0.5 μl, 0.1 μl) from each. We considered the estrogenic effect as the average of the responses obtained for the six measurements. The lowest volume (0.1 μl) of none of the OIVPs produced a measurable response in the yeast test. There are three active substances (D-Phe6-gonadorelin, cloprostenol, dinoprost-trometamin) in the 5 OIVPs, formulated with either of two auxiliary ingredients (originally functioning as preservatives, i.e. benzyl-alcohol and chlorocresol). As Table 4 shows, even though none of these hormones belongs to the group of endoestrogens, all 5 products showed estrogenic effect in the yeast test. This indicates that, owing to their chemical structure, they are able to bind to the human estrogen receptor (hERα).
The veterinary medicine Alfaglandin contained 0.250 mg/ml cloprostenol as an active ingredient, while PGF only contained 0.092 mg/ml. We found, however, that PGF showed a higher estrogenic effect than Alfaglandin (83.23 μg/l vs. 73.6 μg/l, respectively). At a volume of 0.5 μl (a 0.5 μl/120 μl dilution in the 96-well plate), only Alfaglandin showed positive results, indicating that it keeps its estrogenic effect even at higher dilutions (from the point of view of environmental pollution, persistency is a negative characteristic). Regarding the estrogenic effect of D-Phe6-gonadorelin: the estrogenic effect of Gonavet was 5 times higher than that of Ovarelin (53.87 μg/l vs. 9.72 μg/l, respectively), even though both OIVPs contained the same D-Phe6-gonadorelin as an active ingredient and in the same concentration (0.050 mg/ml). They only differed in the auxiliary ingredients: chlorocresol (1 mg/ml) and benzyl-alcohol (15 mg/ml), respectively. For testing the estrogenic effect of dinoprost-trometamin, only one veterinary product was available at the farm, Dinolytic, which contained 5 mg/ml active ingredient and showed a 12.61 μg/l estrogenic effect. The results are similar to those of Ovarelin, which had the same auxiliary ingredient, a relatively high concentration of benzyl-alcohol (16.5 mg/ml).
Analysis of reproduction biology
We evaluated the most important parameters of the reproduction biology of the farm during the research period, which show the performance of the farm from a managerial perspective. The number of hormonal treatments rose steadily at the farm from 2017, as can be seen from the numbers of inseminations and calvings (Fig. 1 ). In the second half of 2018, there was a shift in the insemination protocol (OvSynch to Double OvSynch); therefore, the number of hormonal treatments doubled.
To provide a stable milk supply throughout the year and more independence from natural factors influencing milk production (and to generate more profit), the OIVPs studied were used in large amounts at the farm. The average number of treatments per dairy cow was 0.68 in 2017, 1.02 in 2018, 1.73 in 2019 and 2.2 in 2020. We revealed that not only did the number of hormonal treatments rise continuously from 2017, but so did the estrogenic content (EEQ value) of the slurry. The EEQ value of the resulting slurry rose almost fourfold between the first (2017) and last (2020) year of our research.
Statistical analyses
The two response variables, namely the quarterly averaged estrogenic effects of the liquid and the solid parts of the slurry, are strongly correlated ( r = 0.86, n = 16, p < 0.001). The estrogenic effect of the liquid phase shows a stronger correlation with the background variables (i.e. with the active substances, with the formulated OIVPs and with the auxiliary background variables) than that of the solid phase (Fig. 2 ).
According to the Pearson correlation, the estrogenic effect of the liquid phase has a strong positive and significant correlation ( p < 0.001) with dinoprost ( r = 0.86), chloprostenol ( r = 0.83) and gonadorelin ( r = 0.75). Their relationship with the estrogenic effect of the solid phase is slightly weaker and less significant ( p < 0.01); their correlation coefficients are 0.81, 0.72 and 0.69, respectively.
The quarterly summed number of medical treatments shows a strong correlation with the estrogenic effect of the liquid ( r = 0.89, p < 0.001) and the solid phases ( r = 0.81, p < 0.001). Though slightly weaker, there is still a strong correlation between the estrogenic effect of the liquid phase and Dinolytic ( r = 0.86, p < 0.001), Alfaglandin ( r = 0.79, p < 0.001) and Gonavet ( r = 0.75, p < 0.001), while PGF is non-significant and neutral ( r = 0.08, p = 0.76) and Ovarelin shows a negative correlation ( r = − 0.61, p < 0.05). The correlations of the medical treatments with the estrogenic effect of the solid phase can be listed in the same order: Dinolytic: r = 0.81, p < 0.001; Alfaglandin: r = 0.68, p < 0.01; Gonavet: r = 0.67, p < 0.01; PGF: r = 0.10, p < 0.70; and Ovarelin: r = − 0.55, p < 0.05.
Among the auxiliary background variables, insemination showed a slightly positive correlation ( r = 0.64), dead calving and abortion showed slightly negative correlations ( r = − 0.43 and r = − 0.37, respectively), and calving showed a neutral correlation ( r = 0.04) with the estrogenic effect of the liquid phase of the slurry; of these, only the correlation with insemination is significant at the 0.005 significance level. The correlations of the auxiliary variables with the solid phase give similar results; the only correlation with even marginal significance ( p = 0.05) was for insemination ( r = 0.47).
Figure 2 shows the results of the PCA for the active substances. The ordination compressed the major part of the variance, 93%, into one principal component. The two response variables, i.e. the estrogenic effects of the liquid and the solid phases of the slurry, correlated strongly with each other and deviated only slightly from this principal component. All active substances point in the direction of the estrogenic effect of the liquid phase. Of all of them, chloprostenol correlated most strongly with the liquid phase, while dinoprost and the total amount of active substances seemed the least specific.
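The ordination step can be sketched as follows; this is not the authors' code, and the toy data (four strongly collinear variables over 16 quarters) are made up to mimic the finding that a single component captures most of the variance.

```python
import numpy as np

def pca_explained_variance(X):
    """Fraction of total variance captured by each principal component,
    via the eigenvalues of the covariance matrix of the centred data."""
    cov = np.cov(X - X.mean(axis=0), rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # sort descending
    return eigvals / eigvals.sum()

rng = np.random.default_rng(0)
shared = rng.normal(size=(16, 1))  # one latent quarterly signal
# Four observed variables = shared signal plus small independent noise.
X = np.hstack([shared + 0.05 * rng.normal(size=(16, 1)) for _ in range(4)])
ratios = pca_explained_variance(X)
print(ratios[0])  # dominant first component, close to 1
```

When several background variables track one underlying trend (here, rising treatment intensity), PCA collapses them onto one axis, which is exactly the 93%-on-one-component pattern described above.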
We also composed a composite figure of box plots summarising the results of the chemical analyses and of several tests (Fig. 3 ). Horizontally: the upper row of boxes (a, b, c, d) represents results based on the solid phase of the slurry; boxes a, b and c show UHPLC results and d shows the YES test results. The second row of boxes (e, f, g, h) comes from testing the liquid phase, while the lower row of boxes (i, j, k, l) shows the comparisons of the data measured in all the liquid and solid phase samples. Vertically (columns of boxes): boxes a, e and i show UHPLC results for 17α-E2, boxes b, f and j show UHPLC results for 17β-E2, while boxes c, g and k show UHPLC results for E3. The fourth column (d, h, l) shows the YES test results for the solid and the liquid phases, as well as the comparisons between the data measured before and after the protocol change. Tendencies clearly show that the protocol change raised the estrogenic content of the slurry and that estrogenic compounds tend to bind more strongly to the solid phase.
Discussion
The present field-based, longitudinal study investigates the fate of 5 different oestrus-inducer drugs from the use phase to their appearance in the slurry at a dairy cow farm in Hungary. We examined the use of these OIVPs from 2017 to 2020 and also determined the estrogenic effects of the resulting elevated hormonal excretion of the cows as it appeared in the slurry.
By biological necessity, milk production starts only after calving. Therefore, manipulation of the reproduction biology of the cows aims to maximise the yearly milk production per dairy cow, distributed across the herd throughout the year.
Our research only partially supports the claim of Jobling et al. ( 2009 ) that the results of chemical analyses and of yeast tests are not comparable and that the estrogenic activities (EEQ values) of the samples do not correlate well with the measured analytical concentrations of the individual steroidal estrogens. Our results are more consistent: using UHPLC analyses, we gained exact concentrations of some chemicals, while the YES test provides holistic data on the estrogenic effect of the whole sample, covering a number of molecules. In Fig. 3 , the tendencies are the following: the protocol change in mid-2018 raised the concentrations of 17α-E2 and 17β-E2 alike, both in the solid and liquid phases of the slurry, and the differences are statistically significant. E3 values were larger by one order of magnitude throughout the research period, but the visible rise is not significant. Given that E1 and EE2 results were under the LOD, they are not represented in this figure. The YES test provided similar rising tendencies, with a significant difference in each case.
From an environmental perspective, however, we agree with Jobling et al. ( 2009 ): results of the yeast test can be considered more relevant because the hormone-like effects of all compounds as well as the interactions among the chemicals are observable. Therefore, these results have higher predictive values when it comes to measuring potentially important effects on human health (Jobling et al. 2009 ). That is the main reason why the yeast test was chosen for our research.
Even though some hormonal products contained the same active ingredients, they generated different estrogenic effects depending on their auxiliary ingredients (benzyl-alcohol vs. chlorocresol). The higher the EEQ value we measured for a drug, the stronger its hormonal effect. It is important to note that we gained these values with the co-presence of the active ingredient, which has a known hormonal effect, and the auxiliary ingredient, which is theoretically non-hormonal and serves as a preservative only. The co-presence of the two materials makes chemical/biochemical interactions (synergism or antagonism) between them possible.
It can be hypothesised from the data presented in Tables 1 and 3 that if a medicine with an equal or even lower hormonal level contains a synergistic auxiliary material (e.g. chlorocresol seems to be synergistic), it is able to generate the same or an even higher physiological effect (ovulation, successful insemination, higher milk production) than another drug with a high hormonal content but an auxiliary material (e.g. benzyl-alcohol) with little or no synergistic effect. Once administered to the animal, the medicine works well, keeping its high hormonal effect and imitating a high hormonal content; but the main and the auxiliary materials are metabolised in different ways, the synergistic effect ceases, and the medicine ends up as a low hormonal pollution in the manure or slurry. The pairing of Gonavet and Ovarelin fits our hypothesis: both have the same active substance at the same concentration, but their auxiliary materials differ (chlorocresol vs. benzyl-alcohol, respectively). We measured a 5.54 times higher EEQ value for Gonavet, most likely because the auxiliary substance chlorocresol synergistically strengthened the estrogenic effect of the main ingredient. Our hypothesis is also supported by the fact that Alfaglandin kept its high hormonal effect even at high dilutions. In fact, at that level, only Alfaglandin showed a positive estrogenic effect, owing to its inherently high hormonal power plus the effect of the synergistic auxiliary ingredient. (At that dilution level, its hormonal effect should have ceased and fallen below the LOD.) Even though only Dinolytic represented drugs with dinoprost as an active substance, it also indirectly supports our hypothesis, given that its EEQ value was low, indicating that its auxiliary ingredient, despite its very high concentration, did not exert a synergistic effect with the active substance.
The lesson learned from the above-mentioned examples, which can also be turned into a strategic suggestion for practice, is that it is advisable to choose a medicine which contains a synergistic auxiliary ingredient (chlorocresol), independent of the active substance. This step would reduce the ecological footprint, the risk of food contamination and the human health problems associated with the artificial induction of oestrus in dairy cows. In the light of our results, the slurry can be expected to be less hazardous to the environment when it is later applied in the field.
Limitations of the study: while the Yeast Estrogen Screen (YES) assay measures the activity of the human estrogen receptor, it is important to acknowledge that it cannot serve as a direct model for human cellular responses. This limitation arises from the inherent dissimilarities between yeast and human cells, primarily attributed to the presence of a protective cell wall in yeast. This cell wall significantly alters the manner in which external substances interact with and penetrate yeast cells, resulting in distinct cellular responses compared to human cells. In spite of its limitations, however, the assay works well as a holistic method within the linear range of the calibration curve. Moreover, researchers appreciate the sensitivity of effect-based methods (EBMs) (Itzel et al. 2019 ; Simon et al. 2022 ), as well as their cost-effectiveness and excellent screening capability in ecotoxicologically comprehensive water quality assessment, which is worth implementing EU-wide (Simon et al. 2022 ). Therefore, the need has emerged among experts to incorporate EBMs (incl. the YES test) into the European Water Framework Directive.
Toxicological assessments are often restricted to immediate effects, e.g. oral acute toxicity, in the usual range of concentrations (%, g/l, mg/l, etc.). Micropollutants, on the other hand, are effective at very low (ng/l) concentrations at which conventional instrumental analytical methods are not sensitive enough: LOQ values are often greater than the effective concentrations, meaning that by the time GC- and HPLC-based methods are able to provide reliable results, the chemicals studied are already able to cause adverse effects to humans or the environment.
By contrast, the YES test provides “exquisite sensitivity” to estrogens (Coldham et al. 1997 ). We believe that the YES test can be a sensitive indicator of the estrogenic activity of the environment, providing a warning signal ahead of time to avoid adverse effects on human health.
The studied drugs showed estrogenic effects according to the xenoestrogenic effect mechanism, given that their structural formulae have some similarities to those of the compounds in the estrogenic group. Some ligands of the compounds used are able to bind to those platforms of the receptors which would make bonds with the adequate groups of the real estrogens. If we compare the binding of estrogens to their receptors to the traditional “key-lock” theory, we can see that this case is a “fake key-lock” situation: chemically different compounds with similarities in their structures generate similar biological effects. In fact, when we measure the EEQ values of these materials, we gain a holistic result of the described biochemical process.
The Green Chemistry concept prefers the most benign and natural materials and methods available in order to prevent the formation of harmful products and wastes and to eliminate existing ones. Natural decomposition of EDCs, especially in an aquatic environment, depends on aerobic conditions, sunlight (Kim et al. 2017 ), the presence of Fe (III) ions and dissolved organic matter (DOM) (Gu et al. 2019 ). The process can be artificially accelerated by activated carbon (Rovani et al. 2014 ) or by advanced oxidation processes (AOPs) such as ozonation and H 2 O 2 treatment (Esplugas et al. 2007 ; Wolf et al. 2022 ). However, considering their costs, these are only worthwhile in drinking water and wastewater treatment. Our research proved that there is a significant difference between the EEQ value (EDC content) of the liquid and the solid phases of the slurry and that SEs tend to bind more to the solid phase (this finding is similar to that of Zitnick et al. ( 2011 ), who claimed that 17β-E2 tends to bind strongly to soils and sediments). In practice, this means that the use of separators is feasible: the resulting liquid phase can be applied as irrigation water fairly safely, while treating the solid phase with natural processes (exposure to sunlight and atmospheric oxygen, application of some benign additives) seems more suitable and economically viable, while still being conscious of the possible release of the valuable nitrogen content. Papers reporting studies on manure usually focus on nutrient content and utilisation only, ignoring the possibility of hormonal pollution of agricultural fields by the EDC content. Detailed study of the elimination processes, involving the expensive 14C-labelled SE molecules (Ian et al. 2019 ), or of the uptake of EDCs by plants on the sites is an interesting topic for further research but is beyond the scope of the present paper.
Conclusions
The first principle of Green Chemistry warns us that preventing the formation of wastes is much better, cheaper and more sustainable than treating or cleaning an already polluted material or site in the environment. Our study reveals that intensifying breeding practices in dairy cow farms generate the hidden risk of hormonal pollution of agricultural fields and the environment. Through the food chain, this may cause adverse effects to wildlife and humans alike, which can appear as reduced reproductive fitness and may lie behind the infertility problems of a growing number of couples worldwide.
Due to the potentially large number of hormonal metabolites, UHPLC-based methods alone cannot describe the potential risk of samples, but the modified YES test we worked out provides a feasible solution for testing, anywhere in the world.
We concluded that the simplest way to reduce the hormonal effect of the slurry is choosing the right medicine, in which the main and auxiliary ingredients are combined to utilise a synergistic effect, thereby reducing the level of hormonal pollution. Meanwhile, we suggest further research on OIVPs by veterinary medicine experts.
Our research underlines that slurry is a material which, before being applied in the field, should be treated with new methods, such as separation and composting, because of its hormonal content: not only on account of environmental pollution but also because of human health risks.
Our research proves that the ecological footprint of artificial hormonal treatments can be reduced by raising not the real but the apparent (imitated) hormonal effect of the injection. For a robust, cheap and reliable testing method, the YES test, as our research establishes, is a good option. Beyond its feasibility, its environmental footprint is very small, which is a great advantage from the point of view of the Green Chemistry concept.
Responsible Editor: Ester Heath
The main objective of the research was to study the environmental “price” of large-scale milk production from a rarely examined perspective: the mapping of the estrogenic footprint (the amount of oestrus-inducer hormonal products and the generated endoestrogens) in the resulting slurry at a dairy cow farm. These micropollutants are endocrine-disrupting chemicals (EDCs) and can endanger normal reproductive functions even at ng/kg concentrations. One of them, 17β-estradiol, has a 20,000 times stronger estrogenic effect than bisphenol-A, a widely known EDC of industrial origin. While most studies on EDCs are short-term and/or laboratory based, this study is longitudinal and field-based. We sampled the slurry pool on a quarterly basis between 2017 and 2020. Our purpose was to test the estrogenic effects using a dual approach. As an effect-based, holistic method, we developed and used the YES (yeast estrogen screen) test employing the genetically modified Saccharomyces cerevisiae BJ3505 strain, which contains the human estrogen receptor. For testing exact molecules, UHPLC-FLD was used. Our study points out that slurry contains a growing amount of EDCs, with the risk of these penetrating into the soil, crops and the food chain. Considering the Green Chemistry concept, the most benign ways to prevent pollution of the slurry are choosing appropriate oestrus-inducing veterinary pharmaceuticals (OIVPs) and separating the solid and liquid parts with adequate treatment methods. To our knowledge, this is the first paper on the adaptation of the YES test for medicine and slurry samples, extending its applicability. The adapted YES test turned out to be a sensitive, robust and reliable method for testing samples with potential estrogenic effects. Our dual approach was successful in evaluating the estrogenic effect of the slurry samples.
Graphical Abstract
Supplementary Information
The online version contains supplementary material available at 10.1007/s11356-023-31126-y.
Keywords
Supplementary Information
Below is the link to the electronic supplementary material.
Author contribution
E. G.: conceptualisation, formal analysis, funding acquisition, investigation, methodology, supervision, writing—original draft. J. P.: conceptualisation, formal analysis, funding acquisition, investigation, methodology, supervision, review and editing. T. M.: writing—original draft, review and editing. D. P–H.: investigation, formal analysis. L. S.: investigation, methodology, conceptualisation, formal analysis. L. S.: methodology, data interpretation. R. G.: visualisation, evaluation, writing—original draft, review and editing. P. S.: investigation, methodology, supervision, funding acquisition. T. S.: funding acquisition. L. K.: methodology, data interpretation, visualisation. Á. B-F.: methodology, data interpretation. R. K.: supervision, methodology, funding acquisition.
Funding
Open access funding provided by Széchenyi István University (SZE). This work was supported by the Ministry of Innovation and Technology under the code numbers ÚNKP-20–3-II-SZE-18, ÚNKP-21–3-II-SZE-6 and ÚNKP-22–4-I-SZE-29 New National Excellence Program.
Data availability
The raw experimental data are available from EG upon request.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
The authors grant the publisher the right to publish this paper.
Competing interests
The authors declare no competing interests.
License: CC BY. Citation: Environ Sci Pollut Res Int. 2023 Nov 25; 30(60):125596-125608.
PMC10756890 (PMID: 38157081)
Introduction
Pain is a common symptom in cancer patients; some studies suggested that almost half of patients experience pain at least 3 months after completing curative treatment, and nearly a third experience moderate to severe pain [ 1 , 2 ]. Cancer pain may be caused by the cancer itself or metastases, or may be related to the treatments (e.g., surgical pain, neuropathic pain after chemotherapy) [ 3 ]. Despite its high prevalence and significant impact on patient well-being, it was reported that cancer pain is under-treated in approximately one-third of patients [ 3 ], constituting an important unmet need in clinical practice.
Clinical guidelines for managing cancer pain, including those developed by the World Health Organization (WHO) [ 4 ], National Comprehensive Cancer Network [ 5 ], American Society of Clinical Oncology [ 6 ], European Society of Medical Oncology [ 7 ], and the Japanese Society for Palliative Medicine [ 8 ], suggest that pain should be managed according to the patient’s pain intensity, and that treatment may include an opioid, such as tramadol. In particular, the WHO guidelines position opioids as drugs that should be used according to the clinical assessment and pain intensity for rapid, effective, and safe pain management from the initiation of pain management, even if not based on the conventional three-step analgesia ladder. The guidelines also state that any opioid may be selected for cancer-related pain. Patients may require stronger opioids, other analgesics, or adjuvant therapies, the choice of which will depend on their clinical condition [ 4 – 8 ].
Tramadol is a weak μ-opioid receptor agonist that also inhibits norepinephrine and serotonin reuptake, with proven efficacy for managing chronic pain. Oral administration is preferred, with a regular dosing frequency every 4 or 12 h depending on the formulation prescribed (e.g., immediate-release or extended-release formulations). However, another administration route may be required in some patients.
With a view to improving the pharmacokinetic profile of orally administered tramadol, Nippon Zoki developed a new tramadol formulation as bilayer sustained-release (SR) tablets (hereafter bilayer tablets) in which the top layer comprises 35% of the dose as an immediate-release (IR) formulation and the lower layer comprises 65% of the dose as a SR formulation administered twice-daily (Twotram ® tablets; Nippon Zoki Pharmaceutical Co., Ltd.) [ 9 , 10 ]. This is the first twice-daily tramadol formulation to be developed and marketed in Japan [ 10 ]. To date, several Phase III clinical studies have demonstrated the efficacy of these bilayer tablets for managing chronic non-cancer pain associated with knee osteoarthritis [ 11 ] and postherpetic neuralgia [ 12 ], and the long-term efficacy and safety were demonstrated in a 52-week study [ 10 ]. To expand the potential indications for the bilayer tablet, we performed a randomized controlled study to examine its effectiveness in Japanese patients with cancer pain by testing its non-inferiority versus IR tramadol capsules as an active comparator. | Methods
The study was registered on the Japan Pharmaceutical Information Center clinical trial information (JapicCTI-184143) and Japan Registry of Clinical Trials (jRCT2080224082) (date registered: October 5, 2018).
Patients
Patients were eligible for this study if they had been diagnosed with cancer, had an estimated survival of ≥ 3 months from the start of study drug administration, were currently using non-opioid analgesics (nonsteroidal anti-inflammatory drugs [NSAIDs] or acetaminophen [paracetamol]), had not previously used an opioid analgesic, and the physician deemed it necessary to start tramadol to manage cancer pain. Patients used a 100-mm visual analog scale (VAS) to assess their pain at rest on study days − 2, − 1, and 1 (where study day 1 was the day of starting treatment); only patients with a score of ≥ 25 mm averaged over the 3 days were eligible for the study. Other eligibility criteria included patients treated in an inpatient or outpatient setting, age ≥ 20 years, and adequate liver and renal functions. The major exclusion criteria are listed in the Supplementary Methods.
Study design
This was a randomized, double-blind, double-dummy, active-comparator non-inferiority study comprising three periods: screening period, treatment period, and follow-up period (Fig. 1 ). In the treatment period with a double-dummy procedure, the patients took bilayer tablets (active drug or placebo) twice daily (morning and evening) and IR capsules (placebo or active drug) four times per day (morning, noon, evening, and before bed) according to the random allocation method in a blinded manner for up to 14 days (study days 1–15). The rationale for the 14-day treatment period is described in the Supplementary Methods. The dosing of study drugs and use of rescue medication are summarized in Fig. 1 and described in detail in the Supplementary Methods. Patients were randomized centrally using a dynamic allocation method in which study site and the patient’s mean score for VAS at rest (averaged over study days − 2, − 1, and 1) before the start of study drug administration were used as the allocation factors. After the 14-day treatment period, the patients entered a 7-day follow-up period, during which they could be prescribed tramadol, NSAIDs, or acetaminophen at the discretion of the investigator/subinvestigator. Approved and prohibited therapies are summarized in the Supplementary Methods. The investigator/subinvestigator at each study site was responsible for enrolling the patients using a web-based registration system. The allocation manager was responsible for assigning the study drugs, maintaining blinding, and storing the blinding code. Blinding was maintained until the database was locked.
Endpoints
Every evening, just before administering the study drug, the patients evaluated their pain at rest and during movement over the previous 24-h period using a 100-mm VAS. This information was used to determine the primary endpoint—the change in the VAS for pain at rest from baseline (averaged over the 3 days before starting treatment) to the end of treatment (EOT; averaged over study days 12–14) or at discontinuation (averaged over the 3 days before discontinuation). A clinically relevant change in the VAS for pain was defined as a moderate or greater improvement during treatment relative to the baseline score using the chart shown in Supplementary Table 1 ; this definition was developed and utilized in prior studies in Japan [ 13 , 14 ]. The Supplementary Methods describes the secondary endpoints and safety assessments. There were no changes to the study design after the first patient had been enrolled.
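The primary-endpoint arithmetic described above can be sketched as follows (illustrative numbers, not patient data): baseline is the mean VAS over the three pre-treatment days, the end-of-treatment value is the mean over study days 12–14 (or the three days before discontinuation), and the endpoint is their difference, with negative values indicating less pain.

```python
# Minimal sketch of the change-from-baseline computation for the
# 100-mm VAS; all scores below are made up for illustration.

def vas_change(baseline_days_mm, end_days_mm):
    """Mean of end-period scores minus mean of baseline scores."""
    baseline = sum(baseline_days_mm) / len(baseline_days_mm)
    end = sum(end_days_mm) / len(end_days_mm)
    return end - baseline

print(vas_change([50.0, 48.0, 46.0], [30.0, 28.0, 26.0]))  # -20.0
```

Averaging over three days at each end smooths day-to-day fluctuation in self-reported pain before the difference is taken.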
Statistical analyses
In consideration of the sample size calculation (Supplementary Methods), it was planned to enroll 120 patients per group.
For this study, we defined three analysis populations. The full analysis set (FAS) comprised all patients who received at least one dose of study drug and for whom the primary endpoint (i.e., change in pain VAS at rest from baseline to the EOT or discontinuation) could be calculated for modified intention-to-treat analyses. The per-protocol set (PPS) comprised all patients in the FAS, excluding those with major protocol deviations (e.g., eligibility criteria, randomization/blinding violations, or non-compliance with study drug administration). The safety analysis set (SAF) comprised all patients who received at least one dose of the study drug.
The primary endpoint was analyzed using the FAS and verified using the PPS by analysis of covariance with treatment group as a fixed factor and the baseline VAS score as a covariate to estimate the adjusted mean change in each group and the between-group difference in adjusted mean change with 95% confidence intervals (CI). Non-inferiority was established if the upper limit of the CI for the between-group difference did not exceed the non-inferiority margin (7.5 mm). Descriptive statistics were also calculated for VAS scores at each visit. Other analyses are described in the Supplementary Methods. SAS version 9.4 (SAS Institute, Cary, NC, USA) was used for all data analyses. | Results
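The analysis-of-covariance step can be sketched in a few lines; this is a hedged illustration on simulated data (not the trial data), using a normal approximation (z = 1.96) rather than the exact t critical value. Non-inferiority holds when the upper 95% CI bound of the adjusted between-group difference stays below the 7.5-mm margin.

```python
import numpy as np

def ancova_group_diff(y, group, baseline):
    """Adjusted group difference (ANCOVA with baseline covariate)
    and an approximate 95% CI via ordinary least squares."""
    X = np.column_stack([np.ones(len(y)), group, baseline])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], (beta[1] - 1.96 * se, beta[1] + 1.96 * se)

# Simulated trial: 120 patients per group, true group effect -3 mm.
rng = np.random.default_rng(1)
n = 120
baseline = rng.normal(48, 8, 2 * n)       # baseline VAS (mm)
group = np.repeat([1.0, 0.0], n)          # 1 = test drug, 0 = comparator
y = -20 - 3 * group + 0.1 * baseline + rng.normal(0, 10, 2 * n)
diff, (lo, hi) = ancova_group_diff(y, group, baseline)
print(hi < 7.5)  # non-inferior if the upper bound is below the margin
```

The trial's reported result follows the same logic: an adjusted difference of −2.99 mm with an upper 95% CI bound of 1.99 mm, below the 7.5-mm margin.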
Patients
A total of 281 patients initially provided consent, of which 251 were randomized (126 to the bilayer tablet group and 125 to the IR capsule group) (Fig. 2 ). Of these, 105 completed the study in the bilayer tablet group and 91 in the IR capsule group (Fig. 2 ).
The baseline characteristics of patients in both groups (SAF) were similar (Table 1 ). At baseline, patients in both groups typically reported moderate–high levels of pain, with a mean VAS at rest of 47.67 mm, ranging from 25.6 to 82.7 mm. Most patients (81.7%) were treated as outpatients. The most common cancer site was the gastrointestinal tract (37.1%) followed by the bile duct/liver/pancreas (20.3%). The most common metastatic sites were bone (36.7%), liver (32.3%), and lower lymph nodes (29.1%). The main site of pain was the abdomen (43.4%) followed by the dorsal region (29.5%) and low back (23.9%). All of the patients were using concomitant drugs, including non-opioid analgesics in 98.8% and NSAIDs in 76.1%. Anti-cancer drugs were used in 68.9% of patients.
The primary endpoint could not be determined due to missing values at EOT/discontinuation for 2 patients in the bilayer tablet group and 5 patients in the IR capsule group. Therefore, the FAS comprised 244 patients (bilayer tablet group, 124; IR capsule group, 120).
Treatment adherence, which was assessed using the FAS, was high, with mean ± standard deviation (SD) medication compliance rates of 99.26% ± 2.44% in the bilayer tablet group and 99.15% ± 3.78% in the IR capsule group.
VAS for pain at rest and during movement
The adjusted mean change in VAS for pain at rest from baseline to EOT/discontinuation (FAS) was − 22.07 mm for the bilayer tablet group and − 19.08 mm for the IR capsule group, corresponding to a between-group adjusted mean difference of − 2.99 mm (95% CI − 7.96 to 1.99 mm). The upper 95% CI bound was less than the predefined non-inferiority margin of 7.5 mm, demonstrating non-inferiority of the bilayer tablets to the IR capsules (Fig. 3 A). In the supplementary analysis using the PPS, the adjusted mean difference between the two groups was − 2.98 mm (95% CI − 8.16 to 2.20 mm), which was also less than the non-inferiority margin. Figure 3 B shows the mean values for VAS for pain at rest at baseline and at EOT/discontinuation in both groups. Figure 3 C shows the corresponding data for the VAS for pain during movement. The adjusted mean change in the VAS for pain during movement was − 20.43 and − 19.06 mm in the bilayer tablet and IR capsule groups, respectively, with an adjusted mean difference of − 1.38 mm (95% CI − 6.79 to 4.03 mm). The improvements in VAS scores for pain at rest and during movement on each day showed strong similarity in both groups (Fig. 4 ).
The proportion of patients with a clinically relevant improvement in pain at rest (at EOT/discontinuation) was numerically greater in the bilayer tablet group (87/124, 70.2%) than in the IR capsule group (69/120, 57.5%). Furthermore, a slightly greater proportion of patients in the bilayer tablet group experienced a clinically relevant improvement in pain during movement (71/124, 57.3% vs 60/120, 50.0%).
Estimated total duration of pain per day
The estimated total duration of pain per day was assessed using a five-item scale on study days 2–14 of the treatment period. On study day 2, 50.0% (62/124) of patients in the bilayer tablet group and 54.6% (65/119) of patients in the IR capsule group reported that their duration of pain was < 4 h. This percentage increased slightly in both groups to 59.6% (62/104) in the bilayer tablet group and 60.4% (55/91) in the IR capsule group on study day 14 (Supplementary Table 2 ).
Sleep
The majority of patients reported that their sleep was good during the treatment period. The percentage of patients who reported that they “slept well” or “slept moderately well” ranged from 77% to 86% in the bilayer tablet group and from 77% to 87% in the IR capsule group (Supplementary Fig. 1 ). The percentage of patients who reported that they “slept well” varied from 25% to 38% in the bilayer tablet group and from 24% to 41% in the IR capsule group.
Use of rescue medications
Rescue medications (one or more doses of tramadol capsules) were used by 14.8%–22.3% of patients in the bilayer tablet group and by 10.3%–22.8% of patients in the IR capsule group (Supplementary Fig. 2 A, B). The frequency of rescue medication use remained broadly stable throughout the treatment period. In each group, the majority of patients who used rescue medication took a single dose, with percentages ranging from 9.8% to 18.5% in the bilayer tablet group and from 6.8% to 19.6% in the IR capsule group (Supplementary Fig. 2 C, D).
Quality of life
There were no marked changes in the quality of life (QOL) scores determined using the EuroQOL 5-dimension, 5-level questionnaire (EQ-5D-5L) (Supplementary Fig. 3), or in the individual domains, during the treatment period in either group.
Safety
Treatment period
During the 14-day treatment period, adverse events (AEs) were reported for 97 (77.0%) patients in the bilayer tablet group and 101 (80.8%) patients in the IR capsule group (Table 2). This included severe AEs in 4.8% and 5.6% of patients, respectively, and serious AEs in 8.7% and 13.6% of patients, respectively. However, few of these AEs were thought to be related to the study drugs because most corresponded to exacerbations of the primary or metastatic cancer. In the bilayer tablet group, one patient experienced a severe adverse drug reaction (ADR) and two patients experienced serious ADRs. No severe or serious ADRs were reported in the IR capsule group. AEs resulted in death in 4 (3.2%) patients in the bilayer tablet group and 3 (2.4%) patients in the IR capsule group, but none of these events were considered related to the study drugs. ADRs resulted in discontinuation of the study drug for 10 (7.9%) patients in the bilayer tablet group and 11 (8.8%) patients in the IR capsule group. No ADRs resulted in a reduction in the doses of the study drugs. The three most common AEs in both treatment groups were nausea, constipation, and vomiting (Table 2). The frequencies and types of ADRs were generally similar between the two treatment groups (Supplementary Table 3). ADRs that occurred in ≥ 2% of patients in the bilayer tablet group were nausea (bilayer tablet group and IR capsule group: 27.8% and 32.0%), constipation (19.8% and 16.0%), vomiting (16.7% and 16.8%), somnolence (14.3% and 9.6%), dizziness (7.1% and 4.8%), decreased appetite (6.3% and 0.8%), and malaise (2.4% and 0.8%). There were no consistent trends or notable findings regarding vital signs or 12-lead electrocardiography.
Follow-up period
During the follow-up period, AEs were reported for 43 (34.1%) patients in the bilayer tablet group and 46 (36.8%) patients in the IR capsule group, indicating no difference in safety during this period (Supplementary Table 4 ). ADRs were reported for 3 (2.4%) patients in the bilayer tablet group and 2 (1.6%) patients in the IR capsule group. One AE resulted in death in the bilayer tablet group, but the event was not considered related to the study drug. Severe and serious AEs were reported in both groups, but were not considered related to the study drugs. The most frequent AEs during the follow-up period were constipation, nausea, and vomiting in the bilayer tablet group and nausea, decreased appetite, and constipation in the IR capsule group. There were no reported cases of drug dependency based on the standardized MedDRA query “Drug abuse and dependence.” | Discussion
Our aim was to investigate the non-inferiority of a bilayer tablet formulation of tramadol, comprising IR and SR layers, versus an IR capsule formulation for managing cancer pain. The two treatments achieved similar improvements in the VAS for pain at rest, satisfying the criterion for non-inferiority, which was confirmed in the PPS analysis. Additionally, the changes in VAS for pain at rest and during movement on each study day, the percentages of patients who slept well or moderately well, the use of rescue medication, and the EQ-5D-5L QOL scores were highly comparable, indicating similar effects of both formulations on pain control. Pain improved rapidly, within 2 days of starting treatment, and the improvement remained stable throughout the study in both groups. Overall, these findings indicate that twice-daily administration of the bilayer tablets is as effective as four-times-daily IR tramadol for managing cancer pain.
Opioids are frequently used to manage cancer pain [ 15 – 21 ], due to their effectiveness and inclusion in clinical guidelines/recommendations [ 4 – 8 ]. Furthermore, studies have shown that opioids can improve QOL by alleviating cancer-related pain [ 22 – 27 ]. Here, we have shown that two formulations of tramadol can achieve a clinically relevant improvement in cancer pain at rest and during movement, and both formulations were comparable in terms of other outcomes, including sleep quality, use of rescue medications, and QOL. Therefore, our findings provide further support for using tramadol to manage cancer pain and suggest that physicians can choose the administration regimen (e.g., twice-daily or four-times-daily) that is most suitable for the individual patient.
We also investigated the safety of both study drugs in terms of AEs/ADRs during the 14-day treatment period and the 7-day follow-up period. During the treatment period, AEs were reported for 77.0% and 80.8% of patients in the bilayer tablet and IR capsule groups, respectively, while ADRs were reported for 58.7% and 53.6%, respectively. These values seem reasonable when compared with the frequencies of AEs reported in the initial open-label treatment escalation periods (80.6% and 78.7% in the knee osteoarthritis and postherpetic neuralgia studies, respectively) of two previous dose-withdrawal studies using the bilayer tablet formulation [ 11 , 12 ]. We enrolled opioid-naïve patients, who may be at increased risk of opioid-related AEs and ADRs. Additionally, all of the patients were using concomitant drugs, such as non-opioid analgesics, two-thirds were receiving anti-cancer therapies, and nearly half were using a corticosteroid. Thus, the frequencies of AEs/ADRs were within expected ranges. The most common types of AEs and ADRs were nausea, constipation, vomiting, and somnolence, which are known to be associated with tramadol [ 5 , 6 , 28 ]. Nevertheless, there were few moderate or severe ADRs that were likely to interfere with daily activities, and only two serious ADRs and one severe ADR were reported. Overall, physicians should take appropriate care when prescribing tramadol and monitor its safety, especially in opioid-naïve patients.
Clinical guidelines position opioids, including tramadol, as options for managing cancer pain [ 4 – 8 ]. If acetaminophen or NSAIDs do not provide sufficient pain control, it may be possible to switch to these bilayer tramadol tablets, which have already shown good long-term efficacy and tolerability in patients with chronic non-cancer pain [ 10 – 12 ]. These bilayer tablets could be started early in the patient’s clinical course and stepped down when no longer required, in accordance with WHO recommendations for the initiation, maintenance, and cessation of opioids [ 4 ].
Several preparations of tramadol, including IR and SR formulations, have been developed and are used to manage cancer pain. However, the available formulations have some potential disadvantages related to their pharmacokinetic properties. In particular, the pharmacokinetics of once-daily SR formulations may not be sufficient to maintain effective pain relief over the 24-h period between doses [ 29 ]. As such, patients may require frequent use of rescue medications to maintain adequate pain relief. By comparison, the pharmacokinetics of IR formulations may provide adequate efficacy, but the frequent administration (four-times-daily) may pose a pill burden, which has been associated with decreased treatment satisfaction and reduced medication adherence in other settings [ 30 – 34 ]. Further, studies in other settings suggested that patients were less adherent to a four-times-daily regimen than to a twice-daily regimen [ 35 – 37 ]. Thus, patients may show better adherence to a twice-daily regimen, especially one that provides a rapid onset of action through the IR component and prolonged action through the SR component. Accordingly, we hypothesize that these bilayer tramadol tablets could offer better adherence and at least comparable effectiveness relative to alternative tramadol regimens requiring more frequent administration.
Limitations
There are some limitations of this study that warrant mention. In particular, the treatment period was relatively short (14 days), which prevented us from assessing the longer-term effectiveness of tramadol. This period was selected based on an earlier study in Japan of the same length [ 14 ] and in consideration of the potential impact of anti-cancer therapy in longer-term studies. We should also consider the possibility that the safety assessments were influenced by concomitant drugs, including anti-cancer therapies, which might have inflated the frequency of AEs in this study. However, this risk seems low because the types of AEs were generally consistent with the known safety profile of tramadol. Because the study lacked a placebo group, we cannot exclude the possibility of a placebo or trial effect. However, including a placebo group was deemed unethical because all patients reported clinically significant pain despite treatment with non-opioid analgesics, and withholding active treatment would have necessitated other treatments or high rates of rescue medication. Furthermore, a placebo group was deemed unnecessary because both study drugs had already been evaluated in placebo-controlled trials of other indications [ 11 , 12 , 38 , 39 ]. Finally, we did not use a cross-over design, which could have been useful for evaluating whether patients preferred a particular formulation or administration frequency.
Conclusions
Twice-daily administration of bilayer tramadol tablets comprising 35% immediate-release and 65% sustained-release tramadol was as effective as four-times-daily IR capsules regarding the improvement in the VAS for pain at rest. We also observed strong similarity in the other effectiveness outcomes, including the improvements in VAS for pain at rest and during movement on each study day, sleep quality, use of rescue medications, and EQ-5D-5L QOL scores. Furthermore, the safety profiles of both study groups were consistent with the known safety profile for tramadol. Overall, these findings indicate that bilayer tramadol tablets are an effective and tolerable treatment option for managing cancer pain, comparable to four-times-daily administration of IR capsules. | Conclusions
| Purpose
We investigated whether twice-daily administration of a bilayer tablet formulation of tramadol (35% immediate-release [IR] and 65% sustained-release) is as effective as four-times-daily IR tramadol capsules for managing cancer pain.
Methods
This randomized, double-blind, double-dummy, active-comparator, non-inferiority study enrolled opioid-naïve patients who were using non-steroidal anti-inflammatory drugs or acetaminophen (paracetamol) to manage cancer pain and who self-reported pain (mean value over 3 days ≥ 25 mm on a 100-mm visual analog scale [VAS]). Patients were randomized to either bilayer tablets or IR capsules for 14 days. The starting dose was 100 mg/day and could be escalated to 300 mg/day. The primary endpoint was the change in VAS (averaged over 3 days) for pain at rest from baseline to end of treatment/discontinuation.
Results
Overall, 251 patients were randomized. The baseline mean VAS at rest was 47.67 mm (range: 25.6–82.7 mm). In the full analysis set, the adjusted mean change in VAS was − 22.07 and − 19.08 mm in the bilayer tablet (n = 124) and IR capsule (n = 120) groups, respectively. The adjusted mean difference was − 2.99 mm (95% confidence interval [CI] − 7.96 to 1.99 mm). The upper 95% CI was less than the predefined non-inferiority margin of 7.5 mm. Other efficacy outcomes were similar in both groups. Adverse events were reported for 97/126 (77.0%) and 101/125 (80.8%) patients in the bilayer tablet and IR capsule groups, respectively.
Conclusion
Twice-daily administration of bilayer tramadol tablets was as effective as four-times-daily administration of IR capsules regarding the improvement in pain VAS, with comparable safety outcomes.
Clinical trial registration
JapicCTI-184143/jRCT2080224082 (October 5, 2018).
Supplementary Information
The online version contains supplementary material available at 10.1007/s00520-023-08242-z.
Keywords | Supplementary Information
Below is the link to the electronic supplementary material. | Acknowledgements
The authors express their gratitude to the patients, investigators, and research staff who were involved in this study. The study sponsor acknowledges Nippon Shinyaku Co., Ltd. for manufacturing and supplying the IR capsules. The authors thank Nicholas D. Smith (EMC K.K.) for medical writing support, which was funded by Nippon Zoki Pharmaceutical Co., Ltd.
Study investigators
The following investigators agreed to be mentioned: Hiroki Shomura (Japan Community Health Care Organization Hokkaido Hospital, Hokkaido), Yasunori Nishida (Keiyukai Sapporo Hospital, Hokkaido), Yasushi Tsuji (Tonan Hospital, Hokkaido), Osamu Sasaki (Miyagi Cancer Center, Miyagi), Naoya Sodeyama (Sendai City Hospital, Miyagi), Yasuhiro Sakamoto (Osaki Citizen Hospital, Miyagi), Yasuhiro Yanagita (Gunma Prefectural Cancer Center, Gunma), Hiroshi Kojima (Ibaraki Prefectural Central Hospital, Ibaraki), Naoto Miyanaga (Mito Saiseikai General Hospital, Ibaraki), Masahiro Kamiga (Hitachi, Ltd., Hitachinaka General Hospital, Ibaraki), Masaharu Shinkai (Tokyo Shinagawa Hospital, Tokyo), Hitoshi Arioka (Yokohama Rosai Hospital, Kanagawa), Kazuhiro Seike (Odawara Municipal Hospital, Kanagawa), Kazuhiro Sato (Nagaoka Red Cross Hospital, Niigata), Koichi Nishi (Ishikawa Prefectural Central Hospital, Ishikawa), Kazuhisa Yoshimoto (Fuji City General Hospital, Shizuoka), Kazutoshi Asano (Shizuoka Saiseikai General Hospital, Shizuoka), Keiji Aizu (Kasugai Municipal Hospital, Aichi), Hibiki Kanda (Omi Medical Center, Shiga), Yukito Adachi (Saiseikai Noe Hospital, Osaka), Hiroyuki Narahara (Hyogo Prefectural Nishinomiya Hospital, Hyogo), Keisuke Tomii (Kobe City Medical Center General Hospital, Hyogo), Tomoe Fukunaga (Japanese Red Cross Society Himeji Hospital, Hyogo), Nobukazu Fujimoto (JOHAS Okayama Rosai Hospital, Okayama), Shoichi Kuyama (National Hospital Organization Iwakuni Clinical Center, Yamaguchi), Hidenori Harada (Yamaguchi University Hospital, Yamaguchi), Ryo Katsuki (National Hospital Organization Ureshino Medical Center, Saga), Minoru Yoshida and Shima Uneda (Japanese Red Cross Kumamoto Hospital, Kumamoto), Kodai Kawamura (Saiseikai Kumamoto Hospital, Kumamoto), and Daisuke Himeji (Miyazaki Prefectural Miyazaki Hospital, Miyazaki).
Author contributions
Conceptualization: Noriyuki Katsumata, Shinichi Kawai, Hideshi Nakano, Hideaki Ohtani, Kazutaka Sasaki, Takeshi Adachi.
Data curation: Kazutaka Sasaki, Takeshi Adachi.
Formal analysis: Takeshi Adachi.
Investigation: Masaharu Shinkai, Shoichi Kuyama, Osamu Sasaki, Yasuhiro Yanagita, Minoru Yoshida, Shima Uneda, Yasushi Tsuji, Hidenori Harada, Yasunori Nishida, Yasuhiro Sakamoto, Daisuke Himeji, Hitoshi Arioka, Kazuhiro Sato, Ryo Katsuki, Hiroki Shomura.
Methodology: Noriyuki Katsumata, Shinichi Kawai, Hideshi Nakano, Hideaki Ohtani, Kazutaka Sasaki, Takeshi Adachi.
Project administration: Masaharu Shinkai, Hideaki Ohtani.
Resources: Masaharu Shinkai, Shoichi Kuyama, Osamu Sasaki, Yasuhiro Yanagita, Minoru Yoshida, Shima Uneda, Yasushi Tsuji, Hidenori Harada, Yasunori Nishida, Yasuhiro Sakamoto, Daisuke Himeji, Hitoshi Arioka, Kazuhiro Sato, Ryo Katsuki, Hiroki Shomura.
Supervision: Masaharu Shinkai, Noriyuki Katsumata, Shinichi Kawai, Hideshi Nakano, Hideaki Ohtani.
Visualization: Masaharu Shinkai, Hideaki Ohtani.
Writing – original draft: Masaharu Shinkai, Noriyuki Katsumata, Shinichi Kawai, Hideshi Nakano, Hideaki Ohtani, Kazutaka Sasaki, Takeshi Adachi.
Writing – review and editing: all authors.
Funding
This study was funded by Nippon Zoki Pharmaceutical Co., Ltd.
Data availability
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Declarations
Competing interests
Masaharu Shinkai, Shoichi Kuyama, Osamu Sasaki, Yasuhiro Yanagita, Minoru Yoshida, Shima Uneda, Yasushi Tsuji, Hidenori Harada, Yasunori Nishida, Yasuhiro Sakamoto, Daisuke Himeji, Hitoshi Arioka, Kazuhiro Sato, Ryo Katsuki, and Hiroki Shomura received financial support (personal or to their institution) for this study under a clinical trial contract with Nippon Zoki. Masaharu Shinkai received writing fees from Nippon Zoki in relation to this manuscript. Noriyuki Katsumata received honoraria from Nippon Zoki as a medical expert for this study. Shinichi Kawai received research grants from Nippon Zoki and was involved in this study as a medical expert. Hideshi Nakano, Hideaki Ohtani, Kazutaka Sasaki, and Takeshi Adachi are employees of Nippon Zoki.
Ethics approval
This study was conducted in compliance with the Declaration of Helsinki, Good Clinical Practice, and Japanese ethical guidelines. The protocol was approved by the Institutional Review Board/Ethics Committee at all 49 institutions at which the study was conducted, including the Institutional Review Board of Tokyo Shinagawa Hospital, Medical Corporation Association, Tokyokyojuno-kai.
Consent to participate
All patients provided written informed consent to participate. | CC BY | no | 2024-01-16 23:35:01 | Support Care Cancer. 2024 Dec 29; 32(1):69 | oa_package/9e/d0/PMC10756890.tar.gz |
PMC10759284 | 0 | Editor-in-Chief
Editor-in-Chief | Over 200 health journals call on the United Nations, political leaders, and health professionals to recognise that climate change and biodiversity loss are one indivisible crisis and must be tackled together to preserve health and avoid catastrophe. This overall environmental crisis is now so severe as to be a global health emergency.
The world is currently responding to the climate crisis and the nature crisis as if they were separate challenges. This is a dangerous mistake. The 28th Conference of the Parties (COP) on climate change is about to be held in Dubai while the 16th COP on biodiversity is due to be held in Turkey in 2024. The research communities that provide the evidence for the two COPs are unfortunately largely separate, but they were brought together for a workshop in 2020 when they concluded that: “Only by considering climate and biodiversity as parts of the same complex problem...can solutions be developed that avoid maladaptation and maximize the beneficial outcomes.” 1
As the health world has recognised with the development of the concept of planetary health, the natural world is made up of one overall interdependent system. Damage to one subsystem can create feedback that damages another—for example, drought, wildfires, floods and the other effects of rising global temperatures destroy plant life, and lead to soil erosion and so inhibit carbon storage, which means more global warming. 2 Climate change is set to overtake deforestation and other land-use change as the primary driver of nature loss. 3
Nature has a remarkable power to restore. For example, deforested land can revert to forest through natural regeneration, and marine phytoplankton, which act as natural carbon stores, turn over one billion tonnes of photosynthesising biomass every eight days. 4 Indigenous land and sea management has a particularly important role to play in regeneration and continuing care. 5
Restoring one subsystem can help another—for example, replenishing soil could help remove greenhouse gases from the atmosphere on a vast scale. 6 But actions that may benefit one subsystem can harm another—for example, planting forests with one type of tree can remove carbon dioxide from the air but can damage the biodiversity that is fundamental to healthy ecosystems. 7
The impacts on health
Human health is damaged directly by both the climate crisis, as the journals have described in previous editorials, 8 , 9 and by the nature crisis. 10 This indivisible planetary crisis will have major effects on health as a result of the disruption of social and economic systems—shortages of land, shelter, food, and water, exacerbating poverty, which in turn will lead to mass migration and conflict. Rising temperatures, extreme weather events, air pollution, and the spread of infectious diseases are some of the major health threats exacerbated by climate change. 11 “Without nature, we have nothing,” was UN Secretary-General António Guterres's blunt summary at the biodiversity COP in Montreal last year. 12 Even if we could keep global warming below an increase of 1.5°C over pre-industrial levels, we could still cause catastrophic harm to health by destroying nature.
Access to clean water is fundamental to human health, and yet pollution has damaged water quality, causing a rise in water-borne diseases. 13 Contamination of water on land can also have far-reaching effects on distant ecosystems when that water runs off into the ocean. 14 Good nutrition is underpinned by diversity in the variety of foods, but there has been a striking loss of genetic diversity in the food system. Globally, about a fifth of people rely on wild species for food and their livelihoods. 15 Declines in wildlife are a major challenge for these populations, particularly in low- and middle-income countries. Fish provide more than half of dietary protein in many African, South Asian and small island nations, but ocean acidification has reduced the quality and quantity of seafood. 16
Changes in land use have forced tens of thousands of species into closer contact, increasing the exchange of pathogens and the emergence of new diseases and pandemics. 17 People losing contact with the natural environment and the declining loss in biodiversity have both been linked to increases in noncommunicable, autoimmune, and inflammatory diseases and metabolic, allergic and neuropsychiatric disorders. 10 , 18 For Indigenous people, caring for and connecting with nature is especially important for their health. 19 Nature has also been an important source of medicines, and thus reduced diversity also constrains the discovery of new medicines.
Communities are healthier if they have access to high-quality green spaces that help filter air pollution, reduce air and ground temperatures, and provide opportunities for physical activity. 20 Connection with nature reduces stress, loneliness and depression while promoting social interaction. 21 These benefits are threatened by the continuing rise in urbanisation. 22
Finally, the health impacts of climate change and biodiversity loss will be experienced unequally between and within countries, with the most vulnerable communities often bearing the highest burden. 10 Linked to this, inequality is also arguably fuelling these environmental crises. Environmental challenges and social/health inequities share common drivers, and there are potential co-benefits of addressing them together. 10
A global health emergency
In December 2022 the biodiversity COP agreed on the effective conservation and management of at least 30 percent of the world's land, coastal areas, and oceans by 2030. 23 Industrialised countries agreed to mobilise $30 billion per year to support developing nations to do so. 23 These agreements echo promises made at climate COPs.
Yet many commitments made at COPs have not been met. This has allowed ecosystems to be pushed further to the brink, greatly increasing the risk of arriving at ‘tipping points’, abrupt breakdowns in the functioning of nature. 2 , 24 If these events were to occur, the impacts on health would be globally catastrophic.
This risk, combined with the severe impacts on health already occurring, means that the World Health Organization should declare the indivisible climate and nature crisis as a global health emergency. The three pre-conditions for WHO to declare a situation to be a Public Health Emergency of International Concern 25 are that it: 1) is serious, sudden, unusual or unexpected; 2) carries implications for public health beyond the affected State's national border; and 3) may require immediate international action. Climate change would appear to fulfil all of those conditions. While the accelerating climate change and loss of biodiversity are not sudden or unexpected, they are certainly serious and unusual. Hence we call for WHO to make this declaration before or at the Seventy-seventh World Health Assembly in May 2024.
Tackling this emergency requires the COP processes to be harmonised. As a first step, the respective conventions must push for better integration of national climate plans with biodiversity equivalents. 3 As the 2020 workshop that brought climate and nature scientists together concluded, “Critical leverage points include exploring alternative visions of good quality of life, rethinking consumption and waste, shifting values related to the human-nature relationship, reducing inequalities, and promoting education and learning.” 1 All of these would benefit health.
Health professionals must be powerful advocates for both restoring biodiversity and tackling climate change for the good of health. Political leaders must recognise both the severe threats to health from the planetary crisis as well as the benefits that can flow to health from tackling the crisis. 26 But first, we must recognise this crisis for what it is: a global health emergency. | CC BY | no | 2024-01-16 23:35:07 | Int Health. 2023 Oct 25; 16(1):1-3 | oa_package/d9/fc/PMC10759284.tar.gz |
PMC10761990 | 38168108 | Introduction
Organelles in eukaryotic cells are considered to be the results of early bacterial endosymbiotic events 1 . Chloroplasts, the apparatus for photosynthesis, retain a genome of about 100–200 kb in size that comprises two inverted repeats (IR) separated by a small single-copy (SSC) region and a large single-copy (LSC) region 2 . Chloroplasts utilize nucleus-encoded proteins to conduct replication synchronously. Replication normally initiates from the origin, where DNA helicase separates the DNA double helix and creates the replication fork, utilizing energy from ATP hydrolysis 3 . DNA primase recognizes single-stranded DNA (ssDNA) and synthesizes RNA oligonucleotides that are used as primers to initiate DNA replication. DNA polymerase extends the RNA primers, adding nucleotides matched to the template strand in the 5′ to 3′ direction 4 . While the leading strand is replicated continuously to the 3′ end of the complementary strand, the lagging strand, running in the opposite direction, is synthesized discontinuously as Okazaki fragments as the replication fork moves forward, a process that requires repeated synthesis of RNA primers by primase 5 .
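The quadripartite layout described above implies a simple size relation: the total plastome length is the LSC plus the SSC plus two copies of the IR. As a toy illustration (the region sizes below are approximate, Arabidopsis-like values chosen by us for illustration, not figures from this paper):

```python
# Quadripartite plastome layout: LSC - IRa - SSC - IRb (sizes in bp).
regions = {"LSC": 84_000, "IRa": 26_000, "SSC": 18_000, "IRb": 26_000}

# The two inverted repeats are copies of the same sequence, so their
# lengths are equal.
assert regions["IRa"] == regions["IRb"]

total_bp = sum(regions.values())
print(f"plastome size = {total_bp / 1000:.0f} kb")  # 154 kb, within 100-200 kb
```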
The replisome of the chloroplast genome shares an evolutionary origin with that of bacteriophage T7 6 – 8 , which consists of four proteins: the coupled primase–helicase (gp4), the single-stranded DNA binding protein (gp2.5), the DNA polymerase (gp5), and its processivity factor, E. coli thioredoxin (trx) 9 , 10 . The T7 gp4 protein is central to the replication machinery; it is composed of a zinc-binding domain (ZBD) that recognizes the ssDNA template, an RNA polymerization domain (RPD) that adds ribonucleotides, and a helicase domain that unwinds double-stranded DNA 11 . Most eukaryotes possess a homolog of the T7 gp4 protein named Twinkle (T7 gp4-like protein with intramitochondrial nucleoid localization) 12 . In metazoan organisms, Twinkle lacks the cysteines critical to zinc coordination in the ZBD and the residues required for RNA synthesis in the RPD 12 . Instead, transcripts synthesized by RNA polymerase are used as primers for DNA replication in metazoan mitochondria 13 .
The Arabidopsis nucleus-encoded Twinkle homolog (Arabidopsis Twinkle homolog, ATH) is a 709-residue protein that localizes to chloroplasts and mitochondria 6 . Previous studies demonstrated that ATH has both primase and helicase activities in vitro 6 . ATH synthesizes RNA primers from a 5′-(G/C)GGA-3′ template sequence, and RNA oligonucleotides synthesized by ATH can be efficiently used as primers by the plant organellar DNA polymerases Pol1A and Pol1B 14 . A ZBD is typically associated with DNA binding 15 . In the bacteriophage primase, two CXXC elements are essential for metal coordination in the ZBD 16 . In the plant primase, the first CXXC repeat is conserved, but the second CXXC repeat is substituted by a CXRXKC element 14 . Previous studies indicate that the H33 residue of T7 primase and the K70 residue of Clostridium difficile primase, each located before the second CXXC repeat, drive ssDNA template recognition 16 , 17 , and computer modeling of the ZBD of ATH indicated that the side chains of R166, K168, and W162 are in orientations similar to those of H33 of T7 primase and K70 of C. difficile primase 14 , suggesting that the R166, K168, and W162 residues may be pivotal for ssDNA template recognition.
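Because ATH initiates primers from a 5′-(G/C)GGA-3′ template motif, candidate priming sites on a template strand can be located with a simple overlapping-pattern scan. A minimal sketch (the example sequence and the function name are hypothetical, for illustration only):

```python
import re

# ATH recognition motif on the template strand: 5'-(G/C)GGA-3'.
# A lookahead is used so that overlapping occurrences are all reported.
MOTIF = re.compile(r"(?=([GC]GGA))")

def priming_sites(template: str) -> list[int]:
    """Return 0-based start positions of (G/C)GGA motifs in the template."""
    return [m.start() for m in MOTIF.finditer(template.upper())]

print(priming_sites("ttcGGAaccGGGAtt"))  # -> [2, 9]
```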
The machineries of DNA replication and RNA transcription operate on the same genomic template, and collisions occur when they approach each other. The orientation of gene transcription relative to the direction of the replication fork determines the pattern of transcription-replication conflicts (TRCs). Studies in Bacillus subtilis and human cells revealed that head-on transcription-replication conflicts (HO-TRCs) induce the formation of stable R-loops and block fork progression, representing a major source of genomic instability 18 – 20 . Similar phenomena also occur in plants. Our previous studies revealed that the chloroplast-localized RNase H1 protein AtRNH1C can form a complex with chloroplast-localized DNA Gyrases (AtGyrases) and resolve HO-TRCs and R-loops in the rDNA HO-TRC regions, thus maintaining genome integrity. Mutation of AtRNH1C leads to the formation of aberrant R-loops at these regions, causing genome breaks in chloroplasts and growth defects 20 , 21 . By a reverse genetic screen, we also identified a DNA:RNA helicase, RHON1, as an R-loop resolvase operating in parallel with AtRNH1C to restrict HO-TRC-triggered R-loops and maintain genome integrity in chloroplasts, and the HO-TRC-triggered R-loops can be restricted by controlling the transcriptional activity of plastid-encoded RNA polymerases 20 , 21 .
To uncover the mechanisms by which organisms coordinate transcription and replication with R-loop formation and genome maintenance, we adopted a forward genetic screening strategy to identify suppressors of atrnh1c . Here, we report that ATH, the primase of chloroplast genome replication, is responsible for enhancing HO-TRCs, thus leading to R-loop accumulation and genome instability in atrnh1c . A point mutation in the zinc-binding domain (ZBD) weakens the binding of template DNA and decreases primer synthesis and delivery, thus slowing down replication and relieving transcription-replication competition. Over-expression of ATH leads to aberrant R-loop accumulation, which can be attenuated by simultaneous over-expression of AtRNH1C. Strand-specific DNA damage sequencing revealed that transcription-replication competition can introduce single-strand DNA breaks near the ends of transcription units. Furthermore, mutation of the DNA polymerase Pol1A can also rescue the defects of atrnh1c through a similar mechanism. As HO-TRCs are commonly present in the genomes of all species, our results demonstrate a likely general mechanism in which relaxing strand-specific transcription-replication competition maintains genome integrity.
Plant growth and materials
All Arabidopsis thaliana materials used in this study are in the ecotype Columbia-0 (Col) background. The T-DNA insertion mutants atrnh1c , rhon1 , and SALK_152246 were obtained from the Nottingham Arabidopsis Stock Centre, UK. Surface-sterilized seeds were sown on 1/2 MS medium and incubated at 4 °C for 2 days for stratification. The plants were grown in a chamber under long-day conditions (day/night cycle of 16/8 h) at 22 °C in white light and 18 °C in the dark as described 20 . Unless otherwise indicated, all plant materials were leaves of 21-day-old seedlings grown on 1/2 MS medium.
For the complementation experiments, the genomic DNA sequence of ATH or ATH(R166K) , from 1.5 kb upstream of the ATG to 500 bp downstream of the stop codon, was amplified and cloned into the binary vector pCambia1300, generating the acs1 ATH::ATH and acs1 ATH::ATH(R166K) vectors. GFP or GUS tags were then amplified and fused to the C-terminus of ATH to generate the acs1 ATH::ATH-GFP/GUS and acs1 ATH::ATH(R166K) -GFP vectors. The vectors were constructed using the Fast-Cloning method, and the primers are listed in Supplementary Data 1 . The constructs were transformed into acs1 plants, and transformants were selected with hygromycin.
To generate ATH overexpression transgenic plants, the coding sequence of the ATH gene without the stop codon was cloned into the binary vector pEarleyGate202 and fused with a GFP or FLAG tag at the C-terminus. The construct was transformed into Col-0 and acs1 plants, and transformants were selected with hygromycin. AtRNH1C overexpression transgenic plants were generated by cloning the coding sequence of AtRNH1C without the stop codon into the binary vector pEarleyGate202, fused with an HA tag at the C-terminus. The construct was transformed into Col-0 and atrnh1c plants, and transformants were selected with kanamycin.
To generate genomic mutations of the ATH and Pol1A genes, the plant CRISPR/Cas9 system was used as previously described 20 , 24 . The sequences of the sgRNAs are listed in Supplementary Data 1 .
All vectors were transferred into Agrobacterium tumefaciens strain GV3101 and then transformed into Arabidopsis plants by the floral dip method. Transgenic lines were selected with hygromycin or kanamycin and verified by PCR and immunoblotting.
Whole-genome sequencing-based mapping
The atrnh1c suppressor line acs1 was backcrossed to atrnh1c , and the F2 population was grown on MS plates for 14 days. Two hundred seedlings with the green-leaf phenotype were collected, and genomic DNA was extracted. The genomic DNA was submitted to the DNA-sequencing facility for library preparation and sequencing on a NovaSeq 6000 system (Illumina) to generate 100-bp paired-end reads, yielding >20-fold genome coverage. The reads were mapped to the Col-0 reference genome (TAIR10), and putative single-nucleotide polymorphisms (SNPs) were used as markers to map the candidate interval across the genome using SHOREmap software 22 . Only C/G-to-T/A transition SNPs (the signature of EMS mutagenesis) were retained as candidates. The causative mutation within the mapping interval was annotated using the SHOREmap annotate function.
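The EMS-signature filtering step above (retaining only C/G-to-T/A transitions) reduces to a simple rule. A minimal Python sketch follows; the SNP tuple format and function names are our own illustration, not SHOREmap's interface:

```python
# EMS mutagenesis almost exclusively induces C->T transitions
# (read as G->A on the complementary strand).
EMS_TRANSITIONS = {("C", "T"), ("G", "A")}

def is_ems_snp(ref: str, alt: str) -> bool:
    """Return True if a ref->alt change is an EMS-type transition."""
    return (ref.upper(), alt.upper()) in EMS_TRANSITIONS

def filter_ems_candidates(snps):
    """snps: iterable of (position, ref, alt) tuples (hypothetical format)."""
    return [s for s in snps if is_ems_snp(s[1], s[2])]

# Example: only the C->T and G->A calls survive the filter.
candidates = filter_ems_candidates(
    [(101, "C", "T"), (250, "A", "G"), (399, "G", "A"), (512, "T", "C")]
)
```

Transversions and non-EMS transitions are discarded, which shrinks the marker set before fine-mapping the interval.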
Chlorophyll fluorescence measurements
Chlorophyll fluorescence was measured using a FluorCam (Institute of Botany, Chinese Academy of Sciences). Plants were dark-adapted for 30 min before measurement, and the minimum fluorescence yield (Fo) was recorded under measuring light. A saturating pulse of white light was then applied to measure the maximum fluorescence yield (Fm). The maximal photochemical efficiency of PSII was calculated as the ratio of Fv (Fm − Fo) to Fm. For image analysis, the data were normalized to a false-color scale ranging from 0.4 (blue) to 0.8 (red).
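The Fv/Fm computation and false-color normalization described above are simple arithmetic; a minimal sketch, in which the function names and example fluorescence values are ours:

```python
def fv_fm(fo: float, fm: float) -> float:
    """Maximal PSII photochemical efficiency: Fv/Fm = (Fm - Fo) / Fm."""
    return (fm - fo) / fm

def to_false_color_scale(value: float, lo: float = 0.4, hi: float = 0.8) -> float:
    """Map an Fv/Fm value onto the 0.4 (blue) .. 0.8 (red) display scale,
    returning its normalized position in [0, 1] (clipped at both ends)."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

# Example fluorescence yields in arbitrary units (hypothetical values).
ratio = fv_fm(fo=300.0, fm=1500.0)
```

Healthy Arabidopsis leaves typically give Fv/Fm near 0.8, the red end of the display scale used in the figures.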
Phylogenetic analysis and molecular modeling
The ATH protein sequence was used as a query to search against the genomes of various species in NCBI with BLASTP. The retrieved sequences were submitted to the phylogenetic analysis tool NGPhylogeny.fr with default settings for sequence alignment, and the resulting FastME output tree was uploaded to iTOL (version 5) for visualization. The predicted 3D structure of ATH was obtained from the AlphaFold Protein Structure Database ( https://alphafold.ebi.ac.uk ).
Expression and purification of recombinant proteins
The full-length CDSs of ATH, ATH(R166K), Pol1A, and Pol1B were amplified, cloned into the pGEX-4T vector, and expressed in Rosetta (DE3) cells. Cells were grown at 37 °C until the OD600 reached 0.6, and then 0.5 mM IPTG was added. The cells were shifted to 18 °C and incubated overnight (16 h) with shaking. After centrifugation at 4,000 g, the harvested cells were re-suspended in 1x PBS and sonicated on ice until the suspension became transparent. The supernatant was collected and incubated with GST agarose resin (YEASEN, 20507ES50) at 4 °C for 4 h. The resin was washed four times with 1x PBS, and the proteins were eluted with elution buffer (50 mM reduced glutathione in 1x PBS). The quality of the GST-ATH and GST-ATH(R166K) proteins was assessed by SDS-PAGE, and protein concentrations were measured with a Bradford Protein Assay Kit (Beyotime, P0006C).
Electrophoretic Mobility Shift Assay (EMSA)
Unlabeled and 3′-FAM-tagged synthetic oligonucleotides were used as probes (sequence: SGGASGGASGGASGGASGGASGGASGGA). The ATH and ATH(R166K) proteins (0.5 μg to 5 μg) were incubated with 20 pmol probes in 1x binding buffer (Beyotime, GS005) for 30 min, then the reaction mixture was separated on 8.5% native PAGE gel and visualized by Typhoon FLA9500.
Template-directed primer synthesis and RNA-primed DNA synthesis
Primase reactions were assayed with 100 nM ssDNA template (5′-(T) 7 GGGA(T) 7 -3′), 100 μM GTP, CTP, and UTP, and 10 μCi of [γ- 32 P]-ATP (NEG502A) in a buffer containing 40 mM Tris–HCl pH 7.5, 50 mM potassium glutamate, 10 mM MgCl 2 , and 10 mM DTT. Each primase reaction contained the amount of recombinant protein indicated in the figures. After incubation at 30 °C for 60 min, loading buffer (95% formamide, 0.1% xylene cyanol) was added to the reaction products, and the products were separated on a 27% denaturing polyacrylamide gel containing 3 M urea. The gels were exposed to phosphor screens for 2 days and then scanned using a Typhoon FLA9500.
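The ssDNA template above contains the 5′-(G/C)GGA-3′ recognition motif from which ATH initiates primer synthesis (see Introduction). A short sketch locating such motifs in a template string; the regex-based helper is our own illustration:

```python
import re

# ATH initiates RNA primers from a 5'-(G/C)GGA-3' template motif,
# i.e. G or C followed by GGA.
MOTIF = re.compile(r"[GC]GGA")

def primase_sites(template: str):
    """Return 0-based start positions of (G/C)GGA motifs in a DNA string."""
    return [m.start() for m in MOTIF.finditer(template.upper())]

# The primase assay template: (T)7 GGGA (T)7 — one recognition site.
sites = primase_sites("T" * 7 + "GGGA" + "T" * 7)
```

For the assay template the single GGGA run satisfies the motif once, starting immediately after the seven leading thymidines.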
RNA-primed DNA synthesis was assayed with 100 nM ssDNA template (5′-(T) 5 A(T) 9 GGGGA(T) 10 -3′), 100 μM NTPs, 100 μM dATP, 100 μM dGTP, 100 μM dTTP, and 10 μCi of [α- 32 P]-dCTP (NEG513H) in the same buffer as above. Each reaction contained the amount of recombinant protein indicated in the figures. After incubation at 30 °C for 60 min, the reaction products were detected as described above.
Helicase assay
ATH helicase reactions were assayed in a buffer containing 10 mM Tris–HCl pH 8.0, 8 mM MgCl 2 , 1 mM DTT, and 5 mM ATP, with 25 nM or 10 nM 3′-FAM-tagged dsDNA (SGGASGGASGGASGGASGGASGGASGGA) as the substrate. Each helicase reaction contained the amount of recombinant protein indicated in the figure. After incubation at 37 °C for 2 h, loading buffer (95% formamide, 0.1% xylene cyanol) was added to the reaction products, and the products were separated on a 9% native polyacrylamide gel and scanned using a Typhoon FLA9500.
Protoplast transformation
The CDSs of ATH and ATH(R166K) were cloned into pUC19-35S-eGFP. Protoplasts were extracted from leaves of 30-day-old plants and transformed with the plasmids by 20% PEG-Ca-mediated transfection. After 16 h of transient expression, protoplasts were observed under a confocal microscope (Zeiss, LSM880).
Chloroplast and mitochondrion fractionation
Chloroplast sub-fractionation analysis was performed as previously described 58 with minor modifications. Briefly, 21-day-old plants were homogenized in CIB buffer (10 mM HEPES–KOH pH 8.0, 150 mM sorbitol, 2.5 mM EDTA pH 8.0, 2.5 mM EGTA pH 8.0, 2.5 mM MgCl 2 , 5 mM NaHCO 3 , and 0.1% BSA) on ice. The homogenate was further filtered through a double layer of Miracloth and centrifuged at 200 g/4 °C for 3 min. Then the supernatant was transferred to new 50 ml tubes and centrifuged for 10 min at 1200 g/4 °C to obtain intact chloroplasts. The chloroplasts were resuspended in buffer II (0.33 M sorbitol, 5 mM MgCl 2 , 2.5 mM EDTA pH 8.0, 20 mM HEPES–KOH pH 8.0) and buffer III (5 mM MgCl 2 , 25 mM EDTA pH 8.0, 20 mM HEPES–KOH pH 8.0), successively. After centrifugation at 4 °C, the stroma was in the liquid supernatant, while the sediment contained the thylakoid fraction.
Mitochondrial fractionation was performed as previously described 55 . Twenty-one-day-old plants were homogenized in ice-cold grinding buffer (0.3 M sucrose, 25 mM tetrasodium pyrophosphate, 1% (w/v) polyvinylpyrrolidone-40, 2 mM EDTA, 10 mM KH 2 PO 4 , 1% (w/v) BSA, 20 mM sodium l -ascorbate, 1 mM DTT, 5 mM cysteine, pH 7.5). The homogenate was filtered through 4 layers of Miracloth and centrifuged at 2,500 g/4 °C for 5 min, and the supernatant was then centrifuged at 20,000 g/4 °C for 15 min. The pellet was resuspended in washing buffer (0.3 M sucrose, 10 mM TES, pH 7.5), and the 1,500 g and 20,000 g centrifugation steps were repeated. The resulting pellet was gently resuspended in washing buffer and fractionated on a Percoll step gradient (18%/27%/50%) by centrifugation at 40,000 g for 45 min. Mitochondria were collected at the 27%/50% interface and diluted with washing buffer. After centrifugation at 31,000 g/4 °C for 15 min, the mitochondrial pellet was collected for use.
Protein extraction and immunoblot analysis
Total protein was extracted with protein extraction buffer (50 mM Tris-HCl pH 7.4, 154 mM NaCl, 10% glycerol, 5 mM MgCl 2 , 1% Triton X-100, 0.3% NP-40, 5 mM DTT, 1 mM PMSF, and protease inhibitor cocktail). Anti-FLAG (Sigma, F1804), anti-GFP (ABclonal, AE012), anti-HA (Beyotime, AF5057), anti-plant-actin (ABclonal, AC009), anti-RPOB (PhytoAB, PHY1701), anti-PetA (PhytoAB, PHY0023), anti-RbcL (Agrisera, AS03037A), anti-PsaA (PhytoAB, PHY0053A), and anti-IDH (PhytoAB, PHY0098A) were used as primary antibodies, and goat anti-mouse (EASYBIO, BE0102) or goat anti-rabbit antibodies (EASYBIO, BE0101) were used as secondary antibodies.
GUS staining
The acs1 ATH::ATH-GUS transgenic Arabidopsis plants were vacuum-infiltrated with staining buffer (0.1 M K 3 [Fe(CN) 6 ], 0.1 M K 4 [Fe(CN) 6 ], 1 M NaH 2 PO 4 , 1 M Na 2 HPO 4 , 0.5 M EDTA (pH 8.0) and 20% methanol) for 10 min and then incubated overnight at 37 °C. After staining, the plants were washed three times with 100% ethanol and then photographed.
Seed clearing and observation
Seed clearing was performed as previously described 55 , 59 with minor modifications. In brief, developing siliques at various stages were fixed in ethanol:acetic acid (9:1) and washed with 70% ethanol. The seeds were isolated, mounted on slides in Hoyer’s medium (glycerol/water/chloral hydrate in a ratio of 1:2:8, v/v/w), and observed under a differential interference contrast (DIC) light microscope (Olympus, BX53).
Chloroplast isolation and chloroplast DNA extraction
Leaves of 21-day-old plants were harvested to extract intact chloroplasts using a chloroplast isolation kit (Invent, CP-011) following the manufacturer's instructions. Briefly, plant leaves were added to a filter column containing 200 μl of cold buffer A and gently ground with a grinding rod for 2 min on ice. The filter was capped and centrifuged at 2,000 g for 5 min. The pellet was suspended in 200 μl of cold buffer B and centrifuged at 2,000 g for 10 min. The chloroplasts were washed twice in CIB buffer before use and were then used for the comet, TUNEL, and immunofluorescence staining assays.
For PFGE, slot-blot, DRIP, and cpChIP assays, chloroplasts were extracted by grinding plant leaves in 20 ml of ice-cold CIB buffer (10 mM HEPES–KOH (pH 8.0), 150 mM sorbitol, 2.5 mM EDTA (pH 8.0), 2.5 mM EGTA (pH 8.0), 2.5 mM MgCl 2 , 5 mM NaHCO 3 , and 0.1% BSA). The homogenate was filtered through two layers of Miracloth and centrifuged at 200 g/4 °C for 3 min. The supernatant was transferred to new 50 ml tubes and centrifuged for 10 min at 1200 g/4 °C. The pellet was then suspended in cold CIB buffer and was ready for use.
For chloroplast DNA extraction, chloroplasts were lysed in chloroplast DNA extraction buffer (CIB with 1% SDS and proteinase K) at 37 °C overnight with shaking, and then SDS was removed by adding 20 μM KAc. Chloroplast DNA was purified by phenol/chloroform/isoamyl alcohol (25:24:1, v/v/v) and precipitated with an equal volume of isopropyl alcohol at -20 °C overnight. The chloroplast DNA was dissolved in 1x TE.
DAPI staining and immunostaining
For DAPI staining, intact chloroplasts were fixed in 4% paraformaldehyde for 10 min at room temperature, washed three times with 1x PBS, and stained with DAPI. The stained chloroplasts were observed with a confocal microscope (Zeiss, LSM880).
For immunostaining, the fixed chloroplasts were refixed on PLL-coated slides (CITOGLAS, 188105) with 4% paraformaldehyde for 10 min and then washed three times with 1x PBS. The chloroplasts were pretreated with RNase III for 30 min and washed with 1x PBS. The samples were then blocked with blocking buffer (1% BSA, 0.3% Triton X-100 in 1x PBS) for 20 min at room temperature and incubated with 100 μl of S9.6 antibody diluted in blocking solution (1:100) at 4 °C overnight. The slides were washed three times with 1x PBS and incubated with 100 μl of secondary antibody for 1 h at room temperature. The samples were then washed three times with 1x PBS, and 20 μl of protective agent (Southern Biotech, 0100-20) was added to the slides before covering with a cover glass. Observation was performed with a confocal microscope (Zeiss, LSM880).
Slot-blot hybridization analysis
The slot-blot assay was performed as previously described 20 with minor modifications. Briefly, 5 μg of chloroplast DNA extracted from each sample was treated with 1 U of RNase III (NEB, M0245S) at 37 °C for 30 min, then purified with phenol/chloroform/isoamyl alcohol (25:24:1, v/v/v) and precipitated with an equal volume of isopropyl alcohol. The purified DNA was spotted onto a Hybond N+ membrane using a slot-blot apparatus under vacuum suction. The membrane was then crosslinked, blocked in 5% milk-TBST, and probed with the S9.6 antibody (DNA:RNA hybrid-specific antibody).
Chloroplast chromatin immunoprecipitation (cpChIP)
cpChIP was performed as described previously 36 . The chloroplasts were cross-linked with 1% formaldehyde for 10 min, and the cross-linking reaction was stopped by adding 150 μl of 1 M glycine and incubating for 10 min. The cross-linked chloroplasts were washed twice with CIB and lysed in lysis buffer (50 mM Tris-HCl (pH 7.6), 0.15 M NaCl, 1 mM EDTA (pH 8.0), 1% Triton X-100, 0.1% SDS, 0.1% sodium deoxycholate). Chloroplast DNA was sheared by sonication into fragments of ~500 bp. The supernatant was incubated with an anti-GFP antibody (Abcam, ab290) overnight at 4 °C. Plants without a GFP tag were used as the negative control. ChIP-qPCR was performed using the immunoprecipitated DNA and input DNA. Primers corresponding to four rDNA regions were used for detection and are listed in Supplementary Data 1 .
DNA:RNA hybrid immunoprecipitation (DRIP)
5 μg of chloroplast DNA was fragmented with 5 U of DdeI (NEB, R0175V), MseI (NEB, R0525S), RsaI (NEB, R0167V), and AluI (NEB, R0137V) at 37 °C for 12 h. RNase H-pretreated atrnh1c cpDNA was used as the negative control. The fragmented DNA was then purified by phenol/chloroform/isoamyl alcohol (25:24:1, v/v/v) and precipitated with an equal volume of isopropyl alcohol. 2 μg of the purified fragmented DNA, with or without RNase H treatment, was incubated with 10 μg of S9.6 antibody overnight at 4 °C. Samples were further incubated with 50 μl of Protein G beads (Invitrogen, 10004D) for 4 h at 4 °C. The immunoprecipitated DNA was purified as described above. The primers used for DRIP-qPCR are listed in Supplementary Data 1 .
DEtail-seq
The DEtail-seq assay was performed as described previously 29 with minor modifications. Briefly, chloroplasts were embedded in low-melting-point agarose and lysed in lysis buffer as described above. Agarose plugs were washed in 1x TE buffer 4 times at 37 °C, with the first two washes containing 1 mM PMSF. The plugs were then cut into small pieces and treated with 10 μg/ml RNase A at 37 °C overnight. The agarose pieces were washed in 1x TE buffer 5 times at 37 °C, followed by two washes with 300 μl of 1x CutSmart buffer (NEB, B7204S) at 37 °C. The agarose pieces were then incubated with 3 μl of I-CeuI (NEB, R0699S) in 150 μl of 1x CutSmart buffer at 37 °C for 12 h and washed 3 times with 1x TE buffer. The T7 tailing and ligation step was conducted by adding 65 μl of T7 Tailing & Ligation solution (8 μl T7 Buffer, 5 μl T7 Adapter, 6 μl T7 Enzyme Mix II, 46 μl Low-EDTA TE; ABclonal, RK20228) and incubating at 37 °C for 12 h. The DNA in the agarose was purified using a DNA gel purification kit (Magen, D2111-02) and fragmented to ~250 bp using a focused ultrasonicator (Covaris, S220). The subsequent library preparation steps were conducted according to the manual (ABclonal, RK20228).
In vitro removal of RNA primer by AtRNH1C
The assay was performed as previously described 55 . 100 nM FAM-labeled RNA:DNA hybrid (8-bp) was used as the substrate. The reaction was performed in a buffer containing 50 mM KCl, 4 mM MgCl2, 20 mM HEPES-KOH pH 7.0, 4% glycerol, 50 μg/ml BSA, and 1 mM DTT. After incubation with 100 to 400 nM purified GST-AtRNH1C or GST-GFP proteins for 30 min, the reactions were stopped by 20 mM EDTA. The products were separated on a 12% native polyacrylamide gel and then scanned using Typhoon FLA9500.
Single-cell gel electrophoresis assay (Comet assay)
The comet assay was performed as previously described 20 . Briefly, 10 μl of intact chloroplasts were mixed with 90 μl of LM Agarose at 37 °C, and 50 μl of each sample was added to a CometSlide. Slides were incubated at 4 °C in the dark for 20 min to solidify and then incubated in lysis solution (R&D, 4250-050-01) overnight at 4 °C in the dark. The slides were then incubated in neutral electrophoresis buffer (50 mM Tris and 150 mM sodium acetate, pH 9.0) for 30 min and run at 1 V/cm for 20 min in the same buffer. Slides were subsequently incubated in DNA precipitation solution (1 M NH 4 Ac in 95% ethanol) for 30 min and in 70% ethanol for another 30 min. After drying at 37 °C, the slides were stained with SYBR Green for 30 min followed by a water wash. Samples were visualized by epifluorescence microscopy (Olympus, BX53) with 488 nm excitation. The OpenComet tool 60 launched from ImageJ software was used for analysis and quantification of the results.
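OpenComet reports per-comet damage metrics such as the tail moment, commonly defined as tail length multiplied by the fraction of DNA in the tail. A minimal sketch with hypothetical measurement values:

```python
def tail_moment(tail_length_px: float, tail_dna_percent: float) -> float:
    """Comet tail moment = tail length x fraction of DNA in the tail.
    tail_dna_percent is given in percent (0-100)."""
    return tail_length_px * (tail_dna_percent / 100.0)

def mean_tail_moment(comets):
    """comets: iterable of (tail_length_px, tail_dna_percent) tuples
    (hypothetical per-comet measurements)."""
    values = [tail_moment(length, percent) for length, percent in comets]
    return sum(values) / len(values)

# Example: one heavily damaged and one nearly intact comet.
m = mean_tail_moment([(40.0, 25.0), (10.0, 5.0)])
```

Higher mean tail moments indicate more fragmented cpDNA, which is the basis for comparing DNA damage between genotypes.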
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL assay)
TUNEL assay was performed using a TUNEL Apoptosis Assay Kit-FITC (7sea, AT005-1) according to the manufacturer’s instructions. Briefly, the intact chloroplasts were extracted and fixed onto PLL-coated slides. Then the samples were blocked by blocking buffer (1% BSA, 0.3% Triton X-100 in 1x PBS) for 20 min at room temperature. The slides were rinsed with 1x PBS three times and then incubated with TdT reaction mixture for 1 h at 37 °C. After three washes with 1x PBS, chlorophyll autofluorescence and TUNEL fluorescence were captured under confocal microscopy (Zeiss, LSM880). The fluorescence intensity was quantified by ImageJ software.
EdU labeling of Arabidopsis chloroplast
Twenty-one-day-old plants were transferred to liquid 1/2 MS medium with 20 μM EdU (BeyoClickTM EdU Cell Proliferation Kit with Alexa Fluor 488) and grown in the chamber for 17 h. The plants were then washed with 1x PBS three times and fixed in 4% paraformaldehyde in 1x PBS for 10 min. After fixation, the plants were washed twice with 1x PBS, and a MinuteTM Chloroplast Isolation Kit (CP-011, Invent) was used to extract intact chloroplasts. The chloroplasts were fixed again in 4% paraformaldehyde on PLL-coated slides for 30 min and permeabilized with 0.3% Triton X-100 in 1x PBS for 20 min at room temperature. The slides were then incubated with 50 μl of Click-iT® reaction cocktail (43 μl of 1x Click-iT® EdU reaction buffer, 2 μl of CuSO 4 , 0.1 μl of Alexa Fluor® azide, and 5 μl of 1x Click-iT® EdU buffer additive) for 1 h at room temperature. The samples were then washed three times with 1x PBS, and 10 μl of protective agent (Southern Biotech, 0100-20) was added before covering with a cover glass. Observation was performed with a confocal microscope (Zeiss, LSM880).
Two-dimensional gel electrophoresis of replication intermediates (2D-gel)
Two-dimensional gel electrophoresis was performed as described previously 20 . A total of 20 μg of cpDNA from each sample was digested with 10 U of the restriction enzymes AseI and BglI and precipitated with isopropanol. For the first dimension, the digested DNA was run on a 0.3% agarose gel without ethidium bromide in 0.5x TBE buffer at 0.7 V/cm for 30 h. The second dimension was run on a 1% agarose gel containing 0.3 μg/ml ethidium bromide at 6 V/cm for 5 h at 4 °C. DNA was transferred onto a Hybond N+ membrane (GE, RPN303B) according to standard DNA gel blotting methods. The blots were hybridized to probes radiolabeled with [α- 32 P]-dCTP (NEG513H) using a Random Primer DNA Labeling Kit Ver. 2 (Takara, 6045). The blots were exposed to phosphor screens for 10 days and scanned with a Typhoon FLA9500.
Pulsed-field gel electrophoresis (PFGE)
The chloroplasts suspended in CIB were mixed with 1% low-melting-point agarose (Promega, V2111) dissolved in TE buffer (1:1, v/v) at 37 °C. The plugs were solidified at 4 °C for 30 min and then lysed in lysis buffer (1% sarkosyl, 0.45 M EDTA, 10 mM Tris-HCl (pH 8.0) and 2 mg/ml proteinase K) at 48 °C for 16 h with shaking, with three exchanges of lysis buffer. Agarose plugs were then washed in 1x TE buffer 6 times at 4 °C, with the first two washes containing 1 mM PMSF, embedded in a 1% agarose gel, and subjected to electrophoresis in 0.5x TBE for 42 h at 14 °C using a CHEF Mapper XA system (Bio-Rad). A Lambda Ladder (New England Biolabs, N0341) was used as the molecular weight marker. The electrophoresis parameters were 5 to 120 s of pulse time at 4.5 V/cm. After ethidium bromide staining and photography, the gel was blotted onto a Hybond N+ membrane (GE, RPN303B) according to standard DNA gel blotting methods. A 505-bp fragment of the chloroplast rbcL gene (55677–56181) was labeled with [α- 32 P]-dCTP (NEG513H) using a Random Primer DNA Labeling Kit Ver. 2 (Takara, 6045) and used as the hybridization probe. The blots were exposed to phosphor screens for 5 days and then scanned using a Typhoon FLA9500.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | Results
A point mutation in ATH rescues the developmental defects of atrnh1c
The chloroplast-localized ribonuclease AtRNH1C plays a key role in maintaining plastid genome stability in Arabidopsis, and atrnh1c displays pale-yellow leaves caused by aberrant R-loop accumulation and genome degradation 20 . To investigate new mechanisms regulating HO-TRCs, we conducted an ethyl methanesulfonate (EMS) mutagenesis screen and identified a suppressor of atrnh1c , named acs1 ( atrnh1c suppressor 1 ). Compared with atrnh1c , acs1 showed recovery of the growth defects and leaf color at all stages of plant development, and the quantum efficiency of photosystem II (Fv/Fm), chlorophyll contents, the number and morphology of chloroplasts in cells, and plant fresh weight all recovered to wild-type levels (Fig. 1a, b and Supplementary Fig. 1 ).
Using SHOREmap analysis 22 of the phenotypically recovered plants, we mapped a G-to-A point mutation in the fourth exon of At1g30680 that was highly correlated with the phenotypic variation (Supplementary Fig. 2a ; see Methods). At1g30680 encodes a DNA primase-helicase with dual localization potential (chloroplast and/or mitochondria, Fig. 1c ) that synthesizes RNA primers during organellar DNA replication 6 . Immunoblot analysis showed that the levels of the chloroplast proteins RPOB, PetA, and RbcL were significantly reduced in atrnh1c compared to Col-0, while the level of the mitochondrial protein IDH was unchanged (Supplementary Fig. 2b ). In line with the phenotype, the chloroplast protein levels in the acs1 double mutant were markedly restored to those of Col-0 (Supplementary Fig. 2b ). Through complementation constructs and genetic transformation, we confirmed that the mutation in At1g30680 is responsible for the phenotypic recovery in acs1 (Fig. 1a, b and Supplementary Fig. 1 ).
The ATH protein contains a ZBD, an RNA polymerization domain (RPD), and a helicase domain, and the point mutation in At1g30680 leads to an amino acid change (R166K) in the ZBD (Fig. 1c ). Based on AlphaFold structure prediction, R166 is located at the end of a β-sheet predicted with high confidence (Supplementary Fig. 2c ). Previous studies indicated that R166 might be vital for the ssDNA template recognition of ATH 14 , and multiple sequence alignment revealed that R166 is conserved across land plants (Fig. 1d ). EMSA and ChIP-qPCR assays showed that the R166K mutation indeed caused a significant reduction in the binding capacity of ATH for template DNA (Fig. 1e, f ). Further genetic approaches showed that the complementation construct carrying the point mutation could not complement the phenotype of acs1 , confirming that the R166K mutation in ATH determines the phenotypic recovery in acs1 (Supplementary Fig. 2d, e ).
To directly assess the effect of the R166K mutation on primase activity, we compared the RNA primer synthesis abilities of wild-type ATH and ATH(R166K) in vitro; the results showed that the R166K mutation decreases primer synthesis (Supplementary Fig. 2g ). We also examined ATH- and ATH(R166K)-mediated RNA-primed DNA synthesis by Pol1A and Pol1B, the DNA polymerases located in plant organelles. When wild-type ATH was used as the primase, the amount of synthesized DNA increased gradually with increasing amounts of ATH. In contrast, DNA synthesis was barely detectable when equimolar amounts of ATH(R166K) were used as the primase (Supplementary Fig. 2h ). Moreover, comparison of the helicase activities of wild-type ATH and ATH(R166K) showed that the R166K mutation also impairs the helicase activity of ATH (Supplementary Fig. 2i ). In conclusion, these results indicate that the R166K mutation reduces the primer synthesis and delivery abilities of ATH as a primase.
ATH is essential for plant development
To further investigate the functions of ATH, we analyzed its subcellular localization. Using ATH-GFP transgenic plants, we found that the ATH protein displays a punctate distribution in chloroplasts (Fig. 2a, b ). Protoplast transformation and tobacco leaf infiltration assays also showed that ATH is distributed in a punctate manner in chloroplasts (Supplementary Fig. 3a, b ), and the R166K mutation does not change this distribution pattern (Supplementary Fig. 3a ). The ATH protein could also be detected in mitochondria (Supplementary Fig. 3a, c ), consistent with previous results 23 . Western blot assays revealed that the ATH protein is mainly localized to the thylakoid fraction of chloroplasts and to mitochondria (Fig. 2c ). The RNase H1 protein AtRNH1C was also found to be enriched within the thylakoids, where the nucleoids are located 20 . By protoplast co-transformation, we observed a high degree of co-localization between ATH and AtRNH1C in chloroplasts (Fig. 2d, e ). The co-localization of ATH with AtRNH1C on the chloroplast thylakoids implies possible functional correlations between them.
Given the dual localization of ATH, and to exclude any contribution of its mitochondrial pool to the phenotype, we replaced the ATH signal peptide with the previously reported signal peptides of the AtRNH1C and RecA1 proteins, which are specifically localized to chloroplasts. Genetic complementation experiments showed that ATH restricted to chloroplasts could also effectively recover the phenotype of acs1 (Supplementary Fig. 3d ).
We further examined the tissue distribution of ATH using transgenic complementation plants carrying a GUS (β-glucuronidase) fusion. The ATH protein was mainly distributed in young meristematic tissues such as the SAM (shoot apical meristem), young leaves, root tips, inflorescences, and young siliques (Supplementary Fig. 4 ). These tissues undergo active DNA replication, and we therefore speculate that this expression pattern is closely linked to the function of ATH in DNA replication.
The mutant carrying the R166K mutation in the Col-0 background (named ath-1 ) exhibited growth retardation in the early growth stage compared to Col-0 (Supplementary Fig. 5a ), but the difference became negligible as the plants matured (Supplementary Fig. 5b ). A previous study showed that a T-DNA insertion mutant (SALK_152246) of the ATH gene displays a phenotype similar to that of wild-type plants (Supplementary Fig. 5b ) 8 . We examined the position of the T-DNA insertion in the SALK_152246 line by sequencing and found that it lies in the 5′ UTR of the ATH gene (Supplementary Fig. 5c, d ). RT-qPCR assays also showed that the T-DNA insertion does not affect ATH expression (Supplementary Fig. 5e ). We then tried to create knockout mutants of ATH with the CRISPR/Cas9 system 24 to further verify its function. However, we were unable to obtain any ath homozygous knockout mutants in either the Col-0 or the atrnh1c background; instead, we obtained several lines of ath +/− heterozygotes (Supplementary Fig. 6 ). Therefore, knockout of ATH is likely embryonic lethal. We examined the seeds inside the siliques of ath +/− heterozygous plants and found that some seeds were white, turning brown and shriveling as they developed (Fig. 3a ). These abnormal seeds accounted for approximately 25% of the seeds per silique, while the seeds of both Col-0 and atrnh1c were completely normal (Fig. 3b ). By genotyping, we found that the lethal seeds were ath homozygotes (Fig. 3c ). Further observation showed that the development of the abnormal seeds arrested at the globular stage (Fig. 3d ). These results indicate that ATH is indispensable for plant development and that knockout of ATH leads to embryonic lethality.
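The ~25% abnormal-seed frequency is exactly the expectation for an embryo-lethal homozygous class among the selfed progeny of a heterozygote (3 normal : 1 lethal). A chi-square goodness-of-fit check can be sketched as follows; the seed counts are hypothetical, and 3.841 is the 5% critical value at one degree of freedom:

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic over matched category counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def fits_3_to_1(normal: int, lethal: int, crit: float = 3.841) -> bool:
    """Test observed normal/lethal seed counts against a 3:1 Mendelian
    ratio (df = 1); True means no significant deviation at alpha = 0.05."""
    total = normal + lethal
    stat = chi_square([normal, lethal], [total * 0.75, total * 0.25])
    return stat < crit

# Hypothetical counts from a set of ath+/- siliques: ~26% lethal seeds.
ok = fits_3_to_1(normal=148, lethal=52)
```

Counts far from 3:1 (e.g. a 1:1 split) would exceed the critical value and reject the single-locus embryo-lethality model.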
We analyzed the phenotypes of the CRISPR/Cas9-generated ath +/− heterozygous and 1cath +/− plants. The 1cath-3 +/− plants displayed a partially rescued yellowish phenotype compared to the atrnh1c mutant, while the phenotype of 1cath-5 +/− plants recovered only slightly (Supplementary Fig. 6f ). The recovery could be due to reduced ATH protein levels, as the heterozygous mutants carry only one copy of the functional ATH gene (Supplementary Fig. 6g ). Furthermore, by S9.6 slot-blot assay, we found a significant decrease of R-loops in the chloroplasts of 1cath +/− plants compared to those of atrnh1c (Supplementary Fig. 6h ).
acs1 relieves HO-TRCs and restricts R-loop accumulation
To investigate the mechanism of phenotypic recovery caused by the ATH(R166K) mutation, we first examined the R-loop levels in acs1 by chloroplast S9.6 slot-blot, DRIP-qPCR, and immunostaining. It has been previously reported that S9.6 may not be optimal for immunostaining of RNA:DNA hybrids due to possible interference from dsRNA. However, binding affinity measurements in recent studies showed that S9.6 exhibits specificity for DNA-RNA hybrids over dsRNA 25 , 26 , and our previous immunostaining assays in chloroplasts also showed that dsRNA does not affect the R-loop recognition specificity of the S9.6 antibody 21 , 27 . The results all showed a significant decrease in R-loop levels in acs1 chloroplasts compared to atrnh1c , while R-loop levels in the complementation lines were comparable to those in atrnh1c (Fig. 4a–c ). As R-loop accumulation triggers chloroplast genome instability in atrnh1c 20 , we investigated whether the decreased level of R-loops in acs1 could relieve genome degradation in the chloroplast. Consistent with previous results 20 , PFGE (pulsed-field gel electrophoresis) showed that the monomeric and oligomeric cpDNA (chloroplast DNA) molecules degraded dramatically in atrnh1c compared with Col-0, and these forms were significantly restored in acs1 , with a remarkable decrease in degraded DNA molecules (Fig. 4d ). TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) and neutral comet assays also confirmed the decrease in DNA damage in acs1 compared to atrnh1c (Supplementary Fig. 7a, b ).
We then analyzed the transcription and replication states in acs1 . Bioanalyzer (Agilent 4200) results showed that the mature cp-rRNAs were recovered in acs1 compared to atrnh1c (Fig. 4e ). The expression of chloroplast rRNA transcription intermediates and mature rRNAs was also quantified by RT-qPCR, and the results showed that the level of rRNA transcripts in acs1 chloroplasts was significantly elevated compared to that in atrnh1c (Supplementary Fig. 7c ). We next applied EdU (5-ethynyl-2′-deoxyuridine) staining to assess the replication state in the chloroplast 21 . As with the phenotypic changes, significant alleviation of DNA replication stress was observed in acs1 compared to atrnh1c (Fig. 4f ). We then used DAPI staining to analyze DNA replication states by examining the patterns of nucleoids 28 . In line with previous results, a large percentage of chloroplasts from atrnh1c contained large extended nucleoids (type III), whereas most nucleoids in acs1 displayed a scattered distribution (type I or II, Supplementary Fig. 7d ), indicating that chloroplast genome replication stress was mitigated in acs1 compared to atrnh1c . Furthermore, 2D gel electrophoresis showed that the replication intermediates of the cp-rDNA region were recovered in acs1 compared with atrnh1c (Supplementary Fig. 7e ). Taken together, these results imply that the R166K point mutation of ATH alleviates the excessive accumulation of R-loops in atrnh1c , thereby partially restoring the defects of transcription and replication, and maintaining genomic stability in the chloroplast.
HO-TRCs cause single-strand DNA breakage at the end of transcription units
As HO-TRCs could result in R-loop accumulation and DNA damage, we then analyzed the DNA damage sites in the chloroplast genome by DEtail-seq (DNA End tailing and sequencing), a method we recently developed that can detect 3′ end damage sites with strand-specific information 29 . Alongside this experiment, we also analyzed the distribution of R-loops in the chloroplast genome by ssDRIP-seq 30 , together with the binding profile of ATH by ChIP-seq. Compared with Col-0, the strand breaks in the atrnh1c chloroplast genome were significantly increased, while the damage in the acs1 genome was decreased compared to that in atrnh1c , with high signals present only in the regions of strong transcription-replication collision, where R-loops were highly accumulated (Fig. 5a–c ). Although the DNA breaks were reduced in the double mutant acs1 compared to atrnh1c , they were still higher than those in Col-0 (Fig. 5a–c , Supplementary Fig. 8a–c ). Of particular note, single-strand DNA breaks (SSBs) were dramatically enriched on the strand that templates transcription, especially at the end of transcription units, where strong HO-TRCs occur (Fig. 5a–c , Supplementary Fig. 8a–c ). This pattern of strand-specific SSBs could also be detected in wild-type Col-0 (Fig. 5a–c , Supplementary Fig. 8a–c ), although at a much lower level than in acs1 . These results suggested that the primase ATH, which mainly acts in lagging-strand replication, enhances HO-TRCs and thereby promotes R-loop accumulation and genome instability in highly transcribed regions of the chloroplast genome (Fig. 5d ), and that the R166K mutation in ATH slows down DNA replication, relieves HO-TRCs and rescues genome integrity in atrnh1c .
Another noteworthy phenomenon is that the overall distribution pattern of chloroplast DNA breaks in atrnh1c is similar to the pattern of the ATH binding profile (Fig. 5a ). Previous studies indicated that RNase H is involved in the removal of RNA primers 31 – 33 , and RNA primers were excessively accumulated in RNase H mutants 31 . We hypothesize that in atrnh1c , the inability to remove RNA primers from DNA templates leads to the over-accumulation of short RNA:DNA hybrids, which triggers extensive breaks in the chloroplast genome. To test this speculation, we examined the degradation ability of AtRNH1C for RNA primers in vitro, and the results showed that AtRNH1C was able to efficiently remove small fragments of RNA from the DNA template (Supplementary Fig. 8d ). Thus, these results indicated that in chloroplasts, AtRNH1C may also function to scavenge RNA primers generated during DNA replication in addition to degrading transcriptionally generated R-loops (Fig. 5d ), and the massive accumulation of RNA primers may be an important trigger for extensive breaks in the chloroplast genome of atrnh1c mutant.
ATH antagonizes R-loop clearance machinery to strengthen HO-TRCs and boost DNA damage
To further test the working model, we overexpressed ATH in Col-0 and acs1 , and both resulted in pale-yellow young leaves (Fig. 6a ), with the number of chloroplasts per cell and the quantum efficiency of photosystem II (Fv/Fm) significantly reduced (Fig. 6b, d, e ). The effects became more pronounced as the plants grew and developed (Supplementary Fig. 9a ). Consistent with the phenotype, in the ATH overexpression plants, R-loops over-accumulated in chloroplasts (Fig. 6c, f–h ), and the stability of the chloroplast genome was significantly reduced (Fig. 6i , Supplementary Fig. 9b ), accompanied by a much higher level of DNA damage (Fig. 6j , Supplementary Fig. 9c ). In addition, overexpression of ATH also led to inhibition of transcription and replication, with a decrease in cp-rRNA transcription and DNA replication (Supplementary Fig. 9d–g ).
S9.6 immunostaining of ATH-GFP overexpression plants showed that the GFP signals were highly co-localized with the S9.6 signals in the chloroplast nucleoids (Fig. 6k ). Furthermore, we analyzed the co-localization of ATH with RNA:DNA hybrids by co-transforming ATH-GFP with catalytically inactive AtRNH1C(D222N)-mCherry, and co-localization of the two proteins was seen in the nucleoids (Supplementary Fig. 9h ). Overexpression of AtRNH1C in ATH-overexpressing plants alleviated the yellowish-leaf phenotype (Supplementary Fig. 10 ). These results showed that the growth defects caused by overexpression of ATH could be relieved by overexpressing AtRNH1C, further confirming the antagonism between the primase ATH and the R-loop clearance machinery in which AtRNH1C is involved.
Our previous study found that the R-loop helicase RHON1 is also involved in R-loop clearance to maintain chloroplast genome integrity, and the atrnh1c / rhon1 double mutant ( 1crhon1 ) displays more severe phenotypic defects than atrnh1c 21 . To investigate whether the R166K mutation in ATH can also rescue the phenotype of 1crhon1 , we crossed the rhon1 and acs1 mutants to generate the rhon1acs1 triple mutant (Supplementary Fig. 11a ). Compared with 1crhon1 , the plant size, photosystem II efficiency, and the number of chloroplasts per cell of the rhon1acs1 triple mutant were all partially restored (Supplementary Fig. 11a–d ). S9.6 slot-blot and immunostaining showed a decrease in the R-loop level in the rhon1acs1 triple mutant compared to 1crhon1 (Supplementary Fig. 11e, f ), and the TUNEL assay indicated that the DNA damage in the rhon1acs1 triple mutant was weaker than that in 1crhon1 (Supplementary Fig. 11g ). The transcription and replication levels in the rhon1acs1 triple mutant also recovered (Supplementary Fig. 11h, i ). All these results indicated that the R166K mutation of ATH can also rescue the phenotype of the 1crhon1 double mutant by restricting HO-TRCs and thereby relieving R-loop accumulation. Hence, the primase ATH and the R-loop clearance machinery antagonize each other to balance HO-TRCs and genome integrity.
Mutation of Pol1A can also rescue the growth defects of atrnh1c by restricting HO-TRCs in chloroplasts
Replication of the chloroplast genome requires two DNA polymerases, Pol1A and Pol1B 34 . Previous studies showed that Pol1B is involved not only in replication but also in DNA repair 35 , and the double mutant of pol1b and atrnh1c displayed more severe chloroplast genome degradation and growth defects than atrnh1c 36 . To further investigate the effect of DNA replication on R-loop accumulation and genome stability, we constructed a pol1a and atrnh1c double mutant ( 1cpol1a ) (Supplementary Fig. 12a ). Compared with atrnh1c , the growth defects, photosystem II efficiency, chlorophyll contents, and chloroplast number per cell in leaves of the 1cpol1a double mutant were partially restored (Fig. 7a–e ). DRIP-qPCR and S9.6 immunostaining also showed that the R-loop level in the 1cpol1a double mutant decreased compared to atrnh1c (Fig. 7f, g , and Supplementary Fig. 12b ). In line with the phenotype, TUNEL, PFGE, and DEtail-seq results showed that DNA damage and genome degradation in 1cpol1a were also decreased relative to atrnh1c (Fig. 7h, i , and Supplementary Fig. 12c–e ). These results further confirmed that weakening DNA replication could relieve HO-TRCs and maintain genome integrity. | Discussion
Transcription and replication are the most essential processes sustaining life, and they rely on the same genome as template. Depending on the orientation of gene transcription relative to the movement of the replisome, the transcription and replication machineries move either head-on or codirectionally, which determines the pattern of transcription-replication conflicts. HO-TRCs induce R-loops and compromise replication and the expression of head-on genes, thus triggering DNA breaks and genome instability 37 – 41 . In bacteria, genomes have evolved to organize the majority of genes codirectionally with replication forks, thus avoiding head-on collisions 42 . However, this is not the case in semiautonomous chloroplasts.
Previous studies found that the replication origins in the chloroplast genome are located inside the highly transcribed rDNA regions within the IR regions 43 . This arrangement creates natural HO-TRCs (Fig. 5d ). As the rDNAs are the most highly transcribed regions in chloroplasts, with two replication origins located inside them, the risk of head-on transcription-replication conflicts is much higher. By investigating multiple pathways that restrict HO-TRC-promoted R-loops, we previously found that AtRNH1C and RHON1 synergistically restrict R-loops and release transcription and replication, thus safeguarding chloroplast genome integrity and ensuring normal plant development 20 , 21 . Here, through a phenotypic suppressor screen of atrnh1c , we found that an R166K mutation in the ZBD domain of the chloroplast-localized primase ATH can rescue the growth defects of atrnh1c . Further investigation revealed that the R166K mutation decreases the RNA primer synthesis and delivery activities, slowing down the replication machinery and relieving HO-TRCs, thus reducing R-loops and DNA damage in the chloroplast genome. These results reveal that the chloroplast primase ATH plays a vital role in R-loop coordination and genome integrity maintenance. Furthermore, mutation of Pol1A, one of the two DNA polymerases in plant organelles, can also rescue the defects of the atrnh1c mutant. These results indicate that HO-TRCs can be mitigated by reducing the DNA replication speed, which may be a common mechanism across species.
By sequencing DNA breaks in the chloroplast, we found that DNA breaks were enriched on the lagging strand, especially at the end of transcription units in Col-0 and acs1 . This pattern of strand-specific DNA breaks suggested that the primase ATH, which mainly acts in lagging-strand replication, enhances R-loop accumulation and genome instability at HO-TRC regions. We also found that the distribution pattern of DNA breaks in atrnh1c is similar to the binding pattern of the ATH protein (Fig. 5a ). Previous studies revealed that RNA primers accumulate excessively in RNase H mutants 31 . By testing the ability of AtRNH1C to digest RNA primers, we found that AtRNH1C was able to efficiently remove small fragments of RNA from a DNA template (Supplementary Fig. 8d ). Thus, we hypothesize that in chloroplasts, AtRNH1C can also degrade RNA primers from DNA templates during replication. In atrnh1c , the inability to remove RNA primers from DNA templates leads to the over-accumulation of RNA:DNA hybrids, which triggers extensive breaks in the chloroplast genome. Indeed, overexpression of ATH in wild type and acs1 also leads to more DNA breaks in the chloroplast genome and to plant growth defects, which can be rescued by simultaneous overexpression of AtRNH1C (Fig. 6 , Supplementary Fig. 9 , and Supplementary Fig. 10 ). These genetic and sequencing results further support the hypothesis that AtRNH1C removes RNA primers during replication, especially on the primer-enriched lagging strand.
Strand-specific mutations occur during DNA replication and transcription. Discontinuous synthesis of the lagging strand during replication produces a series of Okazaki fragments, the 5′ ends of which have increased levels of nucleotide substitution 44 . Additionally, longer exposure as ssDNA may make the lagging strand more vulnerable to mutagens 45 . Mutational asymmetry also occurs between the transcribed and non-transcribed strands during transcription 46 – 48 . In the B cell genome of mammals, a localized RNA-processing protein complex determines the strand-specific mutations that catalyze proper antibody diversification 49 , 50 . In the genomes of tumors, transcription-coupled damage on the non-transcribed DNA strand and replication-coupled mutagenesis on the lagging-strand template have been detected, and these widespread asymmetric mutations have been proposed to potentially lead to cancer 51 . Our findings of strand-specific single-strand DNA breaks in the chloroplast genome provide new insights into the molecular mechanisms of strand-specific mutations that occur in a broad range of diseases.
Previous findings confirm that ATH is a bona fide primase indispensable for plant organellar DNA synthesis, resembling gp4 of bacteriophage T7, which is essential for processive replication of phage DNA 14 . Knockout of Twinkle leads to depletion of mitochondrial DNA and lethality in humans and mice 12 , 52 , 53 . Our work likewise shows that null mutation of ATH leads to embryonic lethality in Arabidopsis. These findings further confirm the key roles of ATH in DNA replication and genome integrity maintenance in chloroplasts.
Dual localization in chloroplasts and mitochondria is a common feature of many organellar proteins encoded by the nucleus 23 , 54 , 55 . By subcellular localization analysis and immunoblots, we confirmed that the ATH protein also shows this dual localization. Through further studies, we found that the R166K mutation in ATH can weaken R-loop accumulation and enhance genome stability in the chloroplast genome of the atrnh1c mutant. Since AtRNH1C is a protein specifically localized in chloroplasts, we speculate that the effect of the ATH mutation is exerted mainly in chloroplasts. Immunoblot analysis showed that the expression levels of chloroplast proteins were recovered in acs1 compared to atrnh1c , while there was no obvious difference in the expression of the mitochondrial protein IDH (Supplementary Fig. 2b ). Previous studies showed that DNA maintenance in mitochondria occurs mainly through high levels of homologous recombination (HR)-based replication 56 , which probably initiates with RNA synthesized by RNA polymerase as the primer 57 . In addition, replacing the dual-localization ATH signal peptide with a signal peptide specifically targeting chloroplasts could also rescue the phenotype of acs1 (Supplementary Fig. 3d ). Thus, the importance and function of ATH in mitochondria remain to be further investigated. | Transcription-replication conflicts (TRCs), especially Head-On TRCs (HO-TRCs), can introduce R-loops and DNA damage; however, the underlying mechanisms are still largely unclear. We previously identified a chloroplast-localized RNase H1 protein, AtRNH1C, that can remove R-loops and relax HO-TRCs for genome integrity. Through a mutagenesis screen, we identify a mutation in the chloroplast-localized primase ATH that weakens its binding affinity for the DNA template and reduces the activities of RNA primer synthesis and delivery.
This slows down DNA replication and reduces transcription-replication competition, thus rescuing the developmental defects of atrnh1c . Strand-specific DNA damage sequencing reveals that HO-TRCs cause DNA damage at the end of transcription units on the lagging strand, and that overexpression of ATH can boost HO-TRCs and exacerbate DNA damage. Furthermore, mutation of the plastid DNA polymerase Pol1A can similarly rescue the defects of atrnh1c mutants. Taken together, these results illustrate a potentially conserved mechanism among organisms, in which primase activity can promote transcription-replication conflicts, leading to HO-TRCs and genome instability.
Resolving R-loops caused by transcription-replication conflicts (TRCs) is vital to genome stability. Here, the authors show that the chloroplast-localized primase ATH intensifies template-strand competition and exacerbates Head-On TRC-induced DNA damage.
Supplementary information
The online version contains supplementary material available at 10.1038/s41467-023-44443-0.
Acknowledgements
The authors thank all the members of The Sun Lab and Professor Jie Ren (from Beijing Institute of Genomics, Chinese Academy of Sciences) for their helpful discussions and constructive suggestions. We thank Mrs. Dan Zhang and Mrs. Fang Liu (from the Center of Biomedical Analysis, Tsinghua University) for their assistance with confocal observation and Bioanalyzer analysis, respectively. We thank Mrs. Yan Yin (from the Institute of Botany, Chinese Academy of Sciences) for assisting with the measurement of chlorophyll fluorescence using FluorCam. This work was funded by grants from the National Natural Science Foundation of China (grants 32261133529 and 32170321 to Q. Sun, and 32070651 to W. Zhang). The Sun Lab is supported by the Tsinghua-Peking Center for Life Sciences. W. Zhang is supported by the China Postdoctoral Science Foundation Project (2019M660610) and the postdoctoral fellowship from Tsinghua-Peking Center for Life Sciences.
Author contributions
Q.S. conceived the study and designed the experiments with W.Z.; Z.Y. conducted the EMS mutagenesis screen of atrnh1c suppressors; W.W. assisted W.Z. in the DEtail-seq and PFGE assays; W.Z. performed the rest of the experiments. W.Z. and Q.S. wrote the manuscript, and all authors read and approved the final manuscript.
Peer review
Peer review information
Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.
Data availability
The sequencing data generated in this study have been deposited into NCBI’s Gene Expression Omnibus (GEO) database and are accessible through the GEO Series accession number GSE215443 . Source data are provided with this paper.
Competing interests
The authors declare no competing interests.
Nat Commun. 2024 Jan 2; 15:73
Introduction
Much like classical anions, electrons can behave as solutes in solution. In water, such hydrated electrons (e – (aq) ) have attracted much attention as fundamental quantum solutes and because of their role in radiation chemistry 1 , 2 . The structure and dynamics of e – (aq) has been a topic of much debate 3 , with key outstanding questions relating to solvation at the water/air interface 4 . Specifically, does the electron’s charge distribution reside predominantly above or below the water surface, and how long does the electron at the water/air interface (e – (aq/air) ) remain near the surface? These questions are pertinent because, in many instances, e – (aq) is expected to be found at interfaces, with implications ranging from atmospheric, interstellar and radiation chemistry to quantum solvation, interfacial charge-transfer and plasma processes 1 , 5 – 9 . As a specific example, e – (aq/air) has been implicated in the recently observed enhancement of reactivity in microdroplets, where the electron is assumed to diffuse rapidly into the bulk 10 .
The consensus view of the structure of e – (aq) is one where the electron density predominantly resides within a cavity or excluded volume in the water structure 3 , 11 . It can be conceptualized as an electron in a quasi-spherical box with an electronic ground state defined by a nodeless s-type orbital. Its first excited states are three p-type states and p ← s photo-excitation accounts for much of the optical absorption spectrum, which is the most characteristic observable of e – (aq) 3 , 12 , 13 . But how does this cavity structure change at the water/air interface? There have been conflicting views built upon photoelectron spectroscopy of water cluster anions, where experiments demonstrate the existence of differing binding motifs for the electron 14 , 15 . Some clusters correlate with embryonic forms of e – (aq) , where most of the electron distribution resides below the surface (inside the cluster) while other motifs are more weakly bound, consisting of a partially hydrated electron with most of its electron distribution protruding into the vapor phase 16 , 17 . Experiments on clusters deposited on cold metal surfaces found evidence for the latter 18 , as did an early photoelectron spectroscopy experiment of a water microjet 19 . However, the signal attributed to interfacial e – (aq) is much shorter lived in other microjet experiments 20 , consistent with an excited state 3 . Recent heterodyne-detected vibrational sum-frequency generation (SFG) spectroscopy of the ambient water/air interface suggests a partially hydrated electron 21 , but electronic second-order non-linear spectroscopy at specific wavelengths appears to show kinetics that are broadly consistent with those for e – (aq) buried in the interface 22 , 23 . The theoretical consensus is that e – (aq/air) has most of its electron density in the aqueous phase 24 – 26 .
Significant experimental effort has been devoted to measuring the vertical detachment energy using photoelectron spectroscopy because this quantity can distinguish between the two binding motifs. Such measurements are not readily transferable to an ambient water/air interface, however. In experiments using liquid microjets, which are proxies for the ambient water/air interface, there have been contrasting results 19 , 20 , 27 – 32 . The electronic absorption spectrum, on the other hand, has been the defining experimental feature of e – (aq) 33 – 36 . Its measurement at the ambient water/air interface has not been reported, although it is expected to be sensitive to surface localization 37 , 38 . Here, we use time-resolved electronic SFG spectroscopy to measure the spectrum and subsequent solvation dynamics of e – (aq/air) , thereby directly addressing the two key outstanding questions related to the solvation of electrons at the water/air interface.
Formation and spectroscopy of e – (aq/air)
SFG relies on the second-order non-linear response of a material to an electromagnetic field 39 – 41 . In the electric dipole approximation, this response is only finite where centro-symmetry is broken, which is necessarily the case at the interface between isotropic phases such as water and air. Therefore, two driving fields with frequencies ω 1 and ω 2 will combine to generate the SFG field with frequency ω SFG = ω 1 + ω 2 , exclusively from the interface. The field ω SFG can be enhanced when any of the three fields ( ω 1 , ω 2 or ω SFG ) are resonant with an optical transition of the interfacial species. In the present experiments, both e – (aq) and e – (aq/air) were generated by photo-excitation of phenoxide anions using a pump pulse ω pump ( λ = 257 nm) 42 , which predominantly accesses the S 1 ← S 0 transition, leading to the formation of a fully solvated electron. The phenoxide anion is surface active 43 and serves as a prototypical moiety, participating for example in photo-oxidation of chromophores in the green fluorescent and photoactive yellow proteins 44 . The non-linear response was generated from a variable frequency field, ω 1 ( λ = 620–800 nm), and a fixed frequency field, ω 2 ( λ = 1026 nm), producing ω SFG ( λ = 386–450 nm). Both ω 1 and ω 2 were delayed together with respect to ω pump to allow for time-resolved SFG spectroscopy, which is essential because of the transient nature of e – (aq) . A schematic of the experiment is shown in Fig. 1a with further experimental details in the Methods section.
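Because photon energies (proportional to 1/λ) add in sum-frequency generation, the quoted wavelength ranges are mutually consistent. A small illustrative sketch (Python; not part of the experimental analysis code) using 1/λ_SFG = 1/λ_1 + 1/λ_2:

```python
def sum_frequency_wavelength(lambda1_nm, lambda2_nm):
    """Wavelength of the sum-frequency field: photon energies add,
    so 1/lambda_SFG = 1/lambda_1 + 1/lambda_2 (wavelengths in nm)."""
    return 1.0 / (1.0 / lambda1_nm + 1.0 / lambda2_nm)

# With omega_2 fixed at 1026 nm, scanning omega_1 over 620-800 nm gives
# omega_SFG from ~386 nm to ~450 nm, as quoted in the text.
lam_short = sum_frequency_wavelength(620.0, 1026.0)  # ~386 nm
lam_long = sum_frequency_wavelength(800.0, 1026.0)   # ~450 nm
```
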
To obtain a spectrum, ω 1 was scanned and a kinetic trace of the dynamics was measured up to t = 60 ps, at each wavelength. Specific consideration was given to experimental parameters between measurements at different ω 1 to ensure that the relative signals measured were comparable (see Methods). In the limit of weak non-resonant signal from the nascent water/air interface, the measured SFG signal ( I SFG ) depends quadratically on the surface concentration of absorbers when any of the ω 1 , ω 2 , or ω SFG fields are resonant with a transition. As ω 1 was scanned across a range of the absorption spectrum of e – (aq) , resonance-enhancement at the interface may be anticipated as shown in Fig. 1b . UV excitation of phenoxide also produces the phenoxyl radical, PhO • 45 – 47 . The latter has an absorption spectrum peaking at λ = 400 nm (in aqueous solution) corresponding to the C 2 B 1 ← X 2 B 1 transition 48 . This transition coincides with the wavelength range of ω SFG and, therefore, may also appear in the signal through resonance enhancement, as shown in Fig. 1b . | Methods
Experimental
The time-resolved sum-frequency generation spectroscopy arrangement has been detailed in Ref. 64 . The output of an Yb:KGW laser (Light Conversion, Carbide 5, producing 230 fs pulses at 1026 nm with 83 μJ pulse –1 energy at 12 kHz) was split into three parts. One part was used to generate pump pulses, ω pump , at 257 nm (1.3 μJ pulse –1 ) by frequency quadrupling in two successive BBO crystals. The pump was chopped at 6 kHz to enable active pump-on/pump-off subtraction. A second part was used for light field ω 2 ( λ = 1026 nm). A third part was used to pump an optical parametric amplifier (Light Conversion, Orpheus) producing tuneable light ω 1 (620 ≤ λ ≤ 800 nm). Light fields ω 1 and ω 2 were collinearly combined and focussed onto the liquid surface ( f = 20 cm at an angle of incidence of 73°). Fields ω 1 and ω 2 were temporally overlapped and delayed relative to ω pump using a motorized delay stage. The resultant field, ω SFG , was separated from ω 1 and ω 2 using a Pellin-Broca prism and sent to an optical Kerr gate, where fluorescence from the sample induced by ω pump was suppressed. The ω SFG was subsequently collected using a photomultiplier tube (Hamamatsu H7732-10), the output of which was electronically gated and discriminated (Advanced Research Instruments F-100TD), and pulses were counted on two separate counters for pump-on and pump-off measurements. Count rates were <10 –2 photons/shot and transients were typically collected over 10 6 laser shots/delay. Polarizations of ω 1 , ω 2 , and ω SFG were set to PPP. The pump was also P-polarized.
Specific care was taken to ensure measurements at differing wavelengths were comparable. The resonant signal contribution was normalized to the nonresonant background signal present in each of the pump-off traces, such that the only difference between the pump-on and pump-off channels was the presence of the excited species at the interface, which was affected by pump-probe overlap, sample concentration, and pump power. The sample concentration was kept constant between measurements, with an approximate maximum error of 5%. The pump energy also varied by no more than 5% within, and between, datasets. The main sources of error are the spatial overlap between the pump and probe pulses and any changes in the divergence of the tunable ω 1 field, leading to changes in the focus at the water/air interface. To minimize these, the overlaps and spot sizes were independently monitored using a 10-fold digital microscope.
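One plausible reading of this normalization step is sketched below (Python; the function name and the exact ratio form are our assumptions — the text does not give an explicit formula):

```python
def pump_induced_response(counts_on, counts_off):
    """Express the pump-induced (resonant) SFG counts relative to the
    nonresonant background recorded in the pump-off channel, giving a
    dimensionless quantity that can be compared across wavelengths and
    powers. Illustrative assumption only, not the authors' stated formula."""
    if counts_off <= 0:
        raise ValueError("pump-off background must be positive")
    return (counts_on - counts_off) / counts_off

# e.g. 150 pump-on counts over a 100-count nonresonant background -> 0.5
```
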
The sample (~75 ml of 150 mM phenol (Sigma Aldrich) in water (18 MΩ cm, Millipore), made to pH 13 using NaOH (Sigma Aldrich) to promote deprotonation) was contained in a rotating (0.5 rev s –1 ) petri-dish and the surface height was kept to ± 14 μm using a home-built liquid-height monitor. The surface coverage of phenoxide was ~7% (see Supplementary Note 4 ).
Computational
Quantum/classical molecular dynamics simulations were performed as detailed in previous work 26 , using the electron–water pseudopotential developed by Turi and Borgis 53 and a periodic slab geometry containing 200 SPC water molecules at normal liquid density. The neat liquid slab was equilibrated at 300 K, following which an electron is introduced to define t = 0. Atomistic dynamics are then propagated (using Ewald summation in a 18.1722 Å × 18.1722 Å × 54.5166 Å unit cell), using a Nosé-Hoover thermostat and a 1 fs time step. The one-electron Schrödinger equation is solved on a real-space grid at every step, to obtain adiabatic forces for molecular dynamics. The grid points span the liquid slab and extend well into the vacuum, with a spacing Δ x = Δ y = 0.947 Å and Δ z = 0.971 Å. These simulation parameters are well-tested for obtaining converged dynamics 26 , 58 . | Results
Figure 2a shows the square-root of the SFG signal, I SFG 1/2 (proportional to interfacial concentration), as a function of time and over a range of ω 1 . Signal before t = 0 has been subtracted, residual fluorescence contributions removed, and traces offset for clarity. At all ω 1 , the SFG signal rises at t = 0 within the instrumental time-resolution ( ~200 fs) and then decays on a longer timescale. However, the decay kinetics are markedly different for differing ω 1 : as ω 1 is changed to higher frequency (shorter wavelength), the traces appear to show an offset in signal at longer times ( t = 60 ps) and a much smaller decaying contribution.
The data in Fig. 2a were analyzed using a global fitting methodology to the total signal, I SFG ( t , λ ) 1/2 . A kinetic model involving two species ( i = A and B) is assumed, whose concentrations have simple first-order kinetics with lifetimes τ i : I SFG ( t , λ ) 1/2 = G ( t ) * Σ i c i ( λ ) exp(− t / τ i ), where * indicates convolution with a Gaussian instrument response function, G ( t ), and c i ( λ ) are amplitudes that correspond to a spectrum which is associated with the decay constant of species i (further details in Supplementary Note 1 ). Figure 2a includes the results of the fit, which accounts for all the observed dynamics with no clear systematic deviations, suggesting that the two-component model and assumption about the kinetics have captured the processes taking place. The two lifetimes obtained are τ A = 12 ± 1 ps and τ B > 100 ps. The decay-associated spectra, c i ( λ ), are shown in Fig. 2b . These data reveal that the spectrum associated with species A, which decays with a lifetime τ A , peaks around λ = 720 nm, whereas that associated with B (decaying with a lifetime τ B > 100 ps) has a low amplitude at longer wavelengths and rises towards shorter wavelengths.
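For illustration, the fitted contribution of each species is a step-function exponential decay convolved with the Gaussian IRF, which has a closed form; the function names, the IRF width (σ ≈ 0.085 ps, i.e. ~200 fs FWHM), and the placeholder τ B value below are our assumptions, not the authors' fitting code:

```python
import math

def decay_conv_gauss(t, tau, sigma):
    """Closed-form convolution of a step-function exponential decay,
    H(t) exp(-t/tau), with a unit-area Gaussian IRF of standard
    deviation sigma (all times in ps); scalar t for simplicity."""
    return 0.5 * math.exp(sigma**2 / (2 * tau**2) - t / tau) \
               * math.erfc((sigma / tau - t / sigma) / math.sqrt(2))

def sfg_model(t, c_A, c_B, tau_A=12.0, tau_B=1e3, sigma=0.085):
    """Square-root SFG signal for the two-species model at a single
    wavelength; c_A and c_B play the role of the decay-associated
    amplitudes c_i(lambda). tau_B and sigma are illustrative values."""
    return (c_A * decay_conv_gauss(t, tau_A, sigma)
            + c_B * decay_conv_gauss(t, tau_B, sigma))
```

In a global fit, the lifetimes τ A and τ B would be shared across all wavelengths while c_A and c_B float per wavelength, yielding the decay-associated spectra.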
In Fig. 3 , the absorption spectrum of the hydrated electron (at 298 K) 34 is shown along with the spectrum of interfacial species A. The spectrum associated with A has the general appearance of e – (aq) with the peak positions coinciding within the experimental resolution. Hence, we conclude that species A corresponds to e – (aq/air) and the spectrum measured through SFG is comparable to the absorption spectrum of e – (aq) . Also included in Fig. 3 is the absorption spectrum of PhO • in aqueous solution 48 , along with the spectrum of interfacial species B, where we have added the energy of ω 2 to the tunable ω 1 . The agreement shows that ω SFG is resonant with PhO • , also leading to the enhancement of the SFG signal. The absolute intensities of c i ( λ ) are not quantitatively comparable to the absorption spectra for e – (aq) and PhO • , and the latter two have been scaled in Fig. 3 to aid comparison. (The maximal molar extinction coefficients are ε e(aq) = 2.3 × 10 4 M −1 cm −1 and ε PhO• = 3.0 × 10 3 M −1 cm −1 , respectively.) Contributions from phenoxide excited states can be discounted: the S 1 excited state absorption peak is around 515 nm and emission around 340 nm, none of which is resonant with ω 1 , ω 2 , or ω SFG ; the absorption and emission spectra of the S 2 excited state are not known, but it has a sub-picosecond lifetime (leading to e – (aq) and PhO • ).
Spectrum of e – (aq/air)
The decay-associated SFG spectrum of e – (aq/air) resembles the absorption spectrum of e – (aq) , with the peak position being almost identical. This is expected for an electron residing at the interface but with most of its electron density within the solvent, akin to water cluster anions with the highest vertical detachment energies 49 . It is also in agreement with previous conclusions from certain second-order non-linear experiments 22 , 42 and with theoretical predictions 50 . If the electron were to reside in an orbital that was partially hydrated, protruding out of the liquid and into the vapor phase, then the overall orbital size would be larger, with a concomitantly smaller p ← s transition energy (red-shifted absorption maximum) 37 , 38 , 49 . While the peak position is similar, the spectrum of e – (aq/air) appears to be narrower on the blue-edge compared to e – (aq) . This may be a consequence of the non-linear spectral response based on the hyperpolarizability, which is fundamentally different from the absorption spectrum (see Supplementary Note 2 ). Alternatively, it may arise because the blue edge of the spectrum is associated with excitation to more diffuse orbitals 13 , which are likely to be perturbed at the interface, raising interesting questions about how the conduction band of water is altered at the water/air interface.
We also consider the effect of PhO • that remains following photo-excitation. In the bulk, phenoxide photo-oxidation leaves e – (aq) in close proximity to PhO • and both are formed as a contact pair, [e – :PhO • ] (aq) 45 – 47 . The absorption spectrum for the electron in such a contact pair is virtually identical to that of the free e – (aq) , as demonstrated by previous transient absorption spectroscopy 45 – 47 . The same appears to be true at the water/air interface with the presence of PhO • showing little effect on the e – (aq/air) peak position. The spectrum of the PhO • itself appears to be red-shifted compared to the bulk solution. This is likely a result of the UV excitation wavelength used, which also accesses the second excited state of phenoxide leading to some population of PhO • appearing in an electronically excited state that has an absorption spectrum peaking at λ ≈ 427 nm 47 .
Dynamics of e – (aq/air)
While the spectroscopy and thus the structure of the local solvation environment around the electron are very similar for e – (aq) and e – (aq/air) , the kinetics are clearly not. From transient absorption measurements in the bulk, loss of e – (aq) signal arises from geminate recombination of [e – :PhO • ] (aq) to reform the phenoxide anion, with a fraction also dissociating to form the free e – (aq) and PhO • (yield of e – (aq) ≈ 40% for phenoxide excited at 257 nm) 46 . Critically, loss of e – (aq) signal in the bulk is correlated with loss of PhO • . In contrast, the e – (aq/air) signal at the interface decays with a lifetime τ A = 12 ps, which is at least an order of magnitude faster than the decay of PhO • . Indeed, from the minimal decay in the signal at 620 nm in Fig. 2a , we can exclude geminate recombination as a major decay mechanism, demonstrating the dramatic difference in the overall photochemistry at the interface compared to the bulk. Potential sources of loss of e – (aq/air) through chemical reactions include: scavenging by H 3 O + , but this is in very low concentration in the present experiment; by PhO – , but this seems unlikely as the dianion would not readily form; or by Na + , but this resides below the surface and is not expected to form Na. Therefore, as geminate recombination is the only sub-nanosecond decay mechanism of e – (aq) in the bulk, the differing dynamics observed for e – (aq/air) are likely associated with a physical rather than chemical process.
At the interface the electron can diffuse into the bulk, e – (aq/air) → e – (aq) . In such a scenario, the SFG signal would disappear because e – (aq) would enter a centrosymmetric environment, rendering it insensitive to the second-order non-linear spectroscopic probe. The root-mean-square distance traveled for one-dimensional diffusion can be estimated as z ≈ (2 Dt ) 1⁄2 , where D is the diffusion coefficient. Taking D = 4.9 × 10 –5 cm 2 s –1 for e – (aq) 51 , we find that z = 3.4 Å for the process e – (aq/air) → e – (aq) (with t = 12 ps). This distance is comparable to both the size of e – (aq) (radius of gyration r g = 2.45 Å) and to the distance over which symmetry is broken at the water/air interface (see below and Supplementary Note 2 ) 52 .
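The quoted estimate follows directly from the numbers in the text (variable names are ours):

```python
import math

D = 4.9e-5      # diffusion coefficient of e-(aq), cm^2 s^-1 (ref. 51)
tau_A = 12e-12  # measured interfacial lifetime tau_A, s

z_cm = math.sqrt(2 * D * tau_A)   # 1-D root-mean-square displacement
z_angstrom = z_cm * 1e8           # 1 cm = 1e8 Angstrom
print(f"z = {z_angstrom:.1f} Angstrom")  # → z = 3.4 Angstrom
```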
To support these observations and to provide deeper insight into the dynamics, we reanalyzed atomistic simulations of e – (aq/air) by Coons et al. 26 , which are based on the one-electron Turi-Borgis model 53 that captures numerous physical properties of e – (aq) 3 , 4 , 53 . Importantly, these include the localization timescale following photochemical generation of e – (aq) 3 , for which the model agrees with all-electron ab initio calculations 54 – 56 , along with the experimental partial molar volume of e – (aq) 57 , which is associated with the excluded volume occupied by e – (aq) . Whereas the computational expense of ab initio calculations limits simulations to typical timescales of ≲ 10 ps, the one-electron model allows us to run multiple trajectories of 20–30 ps.
In these quantum/classical trajectory simulations, a liquid/vacuum interface is modeled using a periodic slab of water. (The vacuum is a good model for ambient air on the timescale of the simulations and experiments, as discussed in Supplementary Note 3 ). Water molecules are described in atomistic detail and the one-electron wave function, ψ e , is computed on a real-space grid 26 , 58 . A diffuse electron is introduced at t = 0, where it is weakly bound to dangling O–H moieties. Figure 4a plots the position of the electron’s centroid along the surface normal ( z GDS ), relative to an instantaneous Gibbs dividing surface that is updated at each step and defines the interface ( z GDS = 0) 26 . Results are shown for ten different trajectories and their average, with snapshots of the electron distribution illustrated at representative points along one trajectory. The diffuse electron density (first snapshot in Fig. 4a ) localizes and becomes solvated at the interface in <1 ps, consistent with sub-picosecond localization of a conduction-band electron introduced into liquid water 54 – 56 , 59 . Although these initial dynamics are not comparable to the photo-oxidation studied here, localization is nevertheless driven by formation of electron–water hydrogen bonds that are already evident within the first 0.5 ps. Subsequent dynamics reflect those associated with e – (aq/air) . After 1 ps, the electron’s size is r g ≈ 2.9 Å, and after 5 ps it has settled to a roughly constant value of r g = 2.5 ± 0.1 Å. Simultaneously, e – (aq/air) begins to diffuse into the bulk phase, with a centroid that hovers near z GDS ≈ –3.0 Å until t ≈ 10 ps, which previous theoretical studies have gauged to be the lifetime of e – (aq/air) 26 , 60 . The value z GDS = –3.0 Å (indicated by a dashed line in Fig. 4a ) is significant insofar as it demarcates the boundary of the interfacial region, where the water density is 99.8% of its bulk value. 
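As a minimal sketch of how diagnostics such as the centroid and radius of gyration are extracted from a grid-based wave function (the helper, grid, and Gaussian test case below are our own illustrative constructions, not the published simulation code), both quantities reduce to moments of | ψ e | 2 :

```python
import numpy as np

def electron_moments(psi, axes):
    """Centroid and radius of gyration of a one-electron wave function
    sampled on a uniform real-space grid.

    psi:  array of shape (nx, ny, nz) holding the grid wave function
    axes: tuple (x, y, z) of 1-D coordinate arrays in Angstrom
    """
    rho = np.abs(psi) ** 2
    rho /= rho.sum()                        # normalize |psi|^2 on the grid
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    centroid = np.array([(rho * Q).sum() for Q in (X, Y, Z)])
    dr2 = ((X - centroid[0]) ** 2 + (Y - centroid[1]) ** 2
           + (Z - centroid[2]) ** 2)
    return centroid, np.sqrt((rho * dr2).sum())   # (centroid, r_g)

# sanity check on a Gaussian of known width: |psi|^2 has sigma = 1 A in
# each dimension, so r_g should be sqrt(3) A, centered at z = 0.5 A
ax = np.linspace(-8.0, 8.0, 81)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
psi = np.exp(-(X**2 + Y**2 + (Z - 0.5) ** 2) / 4.0)
centroid, r_g = electron_moments(psi, (ax, ax, ax))
```

In the actual analysis the z-coordinate of the centroid would additionally be referenced to the instantaneous Gibbs dividing surface at each time step.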
Up until ~10 ps, where z GDS ≈ –3.0 Å and r g ≈ 2.5 Å, much of the electron distribution remains in a non-centrosymmetric environment (third snapshot in Fig. 4a, b ) and would thus be observable in the SFG experiment. Beyond ~10 ps, the electron migrates further into the bulk with z GDS ≲ –5.0 Å and at this stage, the vast majority of the electron’s probability distribution | ψ e | 2 ( t ) resides in the centrosymmetric bulk region (fourth snapshot in Fig. 4a, b ), where it is no longer observable in an SFG experiment.
Improvements in ab initio electronic structure software have made many-electron simulations of the hydrated electron more feasible in recent years, albeit over short time scales. Remarkably, these simulations largely validate the detailed predictions of the one-electron Turi-Borgis model 3 , 61 , confirming the veracity of the particle-in-a-cavity model. In the present work, the timescale for the e – (aq/air) → e – (aq) conversion is in excellent agreement with τ A determined by the experiment, suggesting that the simulations have captured the overall process even though they contain neither PhO • nor sodium ions. The simulations additionally underscore the surface sensitivity of the SFG experiment.
Implications
The conclusion that e – (aq/air) is fully solvated, with only a fractional electron density exposed to the vapor phase, brings into question the interpretation of studies suggesting a partially hydrated electron with most of its density protruding into the vapor phase 19 , 21 . Whether this is so has important consequences from a chemical reactivity perspective. A more diffuse density extending into the vapor phase would have different energetics and would lie in an energy range commensurate with electron attachment to molecules including DNA, with the possibility to induce strand cleavage 2 , 19 . It is also interesting to compare our results to photoelectron spectroscopy of e – (aq/air) on liquid microjets, where the photoelectron signal appears to support our conclusion that e – (aq/air) is solvated below the interface, but where the electron is observed to reside at the interface for longer than observed here 20 , 27 – 32 . While the difference between air and vacuum is of little significance on the timescales of the current experiment and simulations (see Supplementary Note 3 ), the nature of the surface and the probe-depth of the spectroscopic method are important. In the current ambient-condition experiment, the Gibbs dividing surface is well-defined, whilst in the case of a liquid microjet, evaporation from the surface is likely to distort this, as evidenced by non-thermal distributions of evaporated molecules 62 . Additionally, the probe depth for the SFG experiment is on the order of 3 Å, as shown in the current experiment and governed by the asymmetry of the water environment. In photoelectron spectroscopy, the probe depth is dictated by the effective attenuation length of an electron in liquid water, which depends on the energy of the outgoing electron and is at best on the order of a few nm for energies between 10 and 100 eV, although precise values are still debated 63 .
In any case, such experiments are not sensitive to the e – (aq/air) → e – (aq) dynamics.
In contrast to the fast internalization dynamics of e – (aq/air) , PhO • remains at the interface for much longer times (>100 ps), suggesting that either the contact pair dissociates very rapidly or else is never formed in the first place. The persistence of PhO • at the water/air interface suggests that it may be reactive with other chemical species in the vapor phase or at the interface. Indeed, a proposed mechanism for chemical rate enhancements observed at the surface of aqueous microdroplets includes the removal of e – (aq/air) , via diffusion to the bulk, as one step 10 . In this model, OH – is ionized by strong interfacial electric fields, leaving reactive OH • at the interface once e – (aq/air) has diffused away. While we make no comment on the validity of this proposed mechanism, the diffusive e – (aq/air) → e – (aq) step is consistent with our observations. Viewed more generally, the interface acts as an effective separator for the two reactants, leaving both radicals in distinct environments where they can then potentially undergo further reactions.
The optical spectrum of e – (aq/air) is similar to that of e – (aq) , demonstrating that most of the electron density resides within the aqueous phase rather than the vapor phase as suggested in certain previous studies. The implication is that the electron by itself is no more reactive at the interface than in the bulk. While spectroscopically similar, the dynamics of e – (aq/air) differ, as it diffuses rapidly into the bulk leaving behind its molecular parent, which in the present study is the phenoxyl radical. The latter remains at the surface where it could participate in reactivity with vapor-phase species, with potential implications for reactivity in microdroplets and in atmospheric chemistry. More generally, the water/air interface also serves as a general model for a hydrophobic interface, suggesting that the ultrafast radical separation dynamics may be common at many aqueous interfaces. From an experimental viewpoint, the spectral and mechanistic insight gained here were only made possible by directly probing all products at the water/air interface, demonstrating the potential of time-resolved electronic SFG as a method for probing interfacial dynamics, in much the same way that transient absorption spectroscopy has become a workhorse technique to probe bulk dynamics. | The hydrated electron, e – (aq) , has attracted much attention as a central species in radiation chemistry. However, much less is known about e – (aq) at the water/air surface, despite its fundamental role in electron transfer processes at interfaces. Using time-resolved electronic sum-frequency generation spectroscopy, the electronic spectrum of e – (aq) at the water/air interface and its dynamics are measured here, following photo-oxidation of the phenoxide anion. The spectral maximum agrees with that for bulk e – (aq) and shows that the orbital density resides predominantly within the aqueous phase, in agreement with supporting calculations. 
In contrast, the chemistry of the interfacial hydrated electron differs from that in bulk water, with e – (aq) diffusing into the bulk and leaving the phenoxyl radical at the surface. Our work resolves long-standing questions about e – (aq) at the water/air interface and highlights its potential role in chemistry at the ubiquitous aqueous interface.
Hydrated electrons at the water/air interface participate in natural and synthetic processes, but investigation of their properties remains challenging. Here the authors show that most of their electron density is solvated below the dividing surface and solvates into the bulk in around 10 picoseconds, leaving its phenoxyl radical source at the interface.
Subject terms | Supplementary information
| Supplementary information
The online version contains supplementary material available at 10.1038/s41467-023-44441-2.
Acknowledgements
We thank Faith Prichard for her support with parts of the experimental work. This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/R513039/1] (C.J.C.J.) and the U.S. National Science Foundation grant CHE-1955282 (J.M.H.).
Author contributions
J.R.R.V. conceived the overall project. C.J.C.J. and J.R.R.V. conceived the experimental methodology and C.J.C.J. performed the experiments and data analysis. J.M.H. conceived the computational methodology, M.P.C. performed the calculations and both M.P.C. and J.M.H. performed the data analysis. J.R.R.V. wrote the manuscript with input from all authors.
Peer review
Peer review information
Nature Communications thanks Ryan McMullen, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Data availability
The experimental and trajectory data generated in this study have been deposited in the Zenodo database under the accession code 10.5281/zenodo.8005779.
Code availability
The source code used in the trajectory simulations is available in the aforementioned Zenodo database.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-16 23:35:00 | Nat Commun. 2024 Jan 2; 15:182 | oa_package/04/e9/PMC10762076.tar.gz |
|
PMC10763720 | 0 | Conclusions
I hope that my perspective as a scientist who is deeply engaged in discovering and developing medicines, framed in the context of atherosclerosis, highlights the importance of staying grounded in human biology when pursuing disease-related research. While signals emanating from human diversity data will not always be as compelling as those seen with PCSK9 or Lp(a), I believe these examples provide key lessons that are broadly applicable to many human diseases and can help improve our chances of translating scientific discoveries into impactful new medicines. | Developing new and effective medicines is perhaps one of the most important, yet difficult endeavors that we pursue as a society. As a drug developer, I am part of a team wrestling with a daunting task — picking the most promising drug targets from the myriad biological pathways in the human body, designing drug molecules that can be administered to patients to modify these targets in a manner that impacts disease with an acceptable safety profile, and demonstrating the efficacy and value of these medicines in clinical trials. These hurdles contribute to the extremely low success rate of drug development programs progressing from target identification to approval ( 1 ). Increasingly, drug developers recognize that the most important consideration for improving the chance of success is to integrate data on human diversity (genomics, transcriptomics, proteomics, and other forms of molecular and phenotypic data) ( 2 ), from target nomination through all subsequent stages of drug development.
Using human biological insights to inform drug development
The main reason investigational programs fail in clinical testing is lack of efficacy ( 3 ), often because the therapeutic hypothesis was flawed from the outset. In many cases, the target was never a part of a biochemical pathway that is perturbed in the human disease, despite provocative correlations or therapeutic effects in cell and animal models. Even when the target induces a specific mechanistic biological effect, the target biology may differ in humans due to differences in the primary pathway or alternative pathways that circumvent the therapeutic intervention. In cases where research teams are “on the right track” with a sound therapeutic hypothesis and drug candidate that meets key technical criteria, clinical trial participants are genetically and phenotypically diverse, with different environmental exposures, making any potentially observable biological effect prone to substantial variability, with reduced efficacy in some individuals. Given these challenges, we must embark on the drug development journey with the highest possible conviction in 3 things: (a) that a specific target, pathway, or biological process is relevant in the human disease of interest; (b) that we can quantify target engagement by the drug in a manner that relates to the target’s mechanism of action; and (c) that we can identify a patient population in which the therapeutic intervention can demonstrate meaningful clinical benefit. Data on human diversity can be invaluable in informing each of these steps.
Genomes continuously evolve through random variation and selection, leading to the enormous genetic and phenotypic diversity observed in humans. This evolution has been nonlinear and nonuniform, being influenced by selective population growth. Our evolutionary history thus provides a rich resource of human genome variation that can be mined and harnessed for drug discovery. Information from whole-genome sequencing, transcriptomics, proteomics, structural prediction modeling, and detailed phenotypic assessments (including electronic health records), coupled with both hypothesis-free and hypothesis-informed analytical methods, can be utilized to identify pathways linked to phenotypes of interest. This knowledge forms a powerful foundation for drug developers to prioritize targets that are more likely to be safe and effective when pharmacologically interdicted in humans. Information on patient heterogeneity can also help clinical trialists to better identify patient subsets that may preferentially respond to a specific therapeutic intervention.
Targeting PCSK9: a model for integrating human genetics
Drug development for atherosclerotic cardiovascular disease (ASCVD) serves as a powerful example for how pursuing drug development in the context of human biological diversity can improve success rates, and highlights principles that can be applied to all human disease. Despite established interventions that include the statin class of low-density lipoprotein cholesterol–lowering (LDL-C–lowering) agents, incidence of ASCVD and its sequelae (e.g., death, myocardial infarction, heart failure, stroke) continues to increase worldwide ( 4 ), representing a substantial societal burden with an unmet need for new therapies. However, addressing this need in an already-crowded pharmacological landscape requires identification of novel and effective pathways, as well as drug mechanisms that are plausible and scalable. First, a new drug must meaningfully improve patient outcomes (e.g., decrease heart attacks) when added on top of established standard of care in a randomized trial of highly heterogeneous patients. Second, the most effective ASCVD therapies are preventive and taken chronically, making the bar for safety very high, even for high-risk patients. Third, while a therapeutic intervention may elicit an intended physiological effect in humans (e.g., altering a plasma lipid species, decreasing platelet activation, improving endothelial function, decreasing an inflammatory mediator) in early-phase clinical studies, these short-term changes do not necessarily indicate that clinical outcomes will be improved. Moreover, preclinical models of atherosclerosis are principally driven by hypercholesterolemia, are short-term perturbations (weeks to months) compared with the protracted course of human ASCVD (decades), mainly assess arterial plaque burden, and fail to recapitulate the event-driven nature of human disease progression. 
Given these obstacles, it is simply too risky to pursue drug targets in ASCVD in the contemporary era without being grounded in molecular insights into the relevant aspects of human diversity.
The development of proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors is perhaps one of the finest examples of how human genetics can catalyze translational research, providing guiding principles for future cardiovascular drug development. In 2003, rare coding variants in PCSK9 , which was then an obscure gene, were reported to cause hypercholesterolemia in humans ( 5 ). Subsequently, the Dallas Heart Study investigators identified presumed loss-of-function variants in PCSK9 that were associated with lower LDL-C and lower incidence of cardiovascular events ( 6 ). While these human genetic observations were the basis for pursuing PCSK9 inhibition for LDL-C lowering and cardiovascular prevention, preclinical studies that were pursued in the context of the human genetics were also part of the landscape for drug discovery. Basic mechanistic studies in cultured cells and mouse models indicated that PCSK9 underwent rapid autoproteolysis into a mature form that triggered endosomal trafficking and intracellular degradation of the LDL receptor, culminating in decreased cell surface receptor density and increased plasma LDL-C concentration ( 7 – 11 ). Structure-function analyses of PCSK9 confirmed that the protective variants were indeed loss-of-function variants with defective synthesis, trafficking, or secretion, while variants associated with hypercholesterolemia featured gain-of-function mechanisms that increased interactions between PCSK9 and LDL receptors ( 7 , 12 ).
These preclinical studies and others, when interpreted in the context of compelling human genetics, impacted translation. First, the genetic and preclinical studies together confirmed that PCSK9 loss of function was protective, supporting inhibition as the therapeutic approach — it is much easier to engineer a drug molecule to potently inhibit a target than to activate it. In addition, the identification of protective loss-of-function variants suggested that inhibiting this target may be generally relevant to myocardial infarction risk and not just to the individuals with gain-of-function mutations. Second, biochemical studies suggested that PCSK9 did not have proteolytic activity against other proteins involved in LDL-C homeostasis such as the LDL receptor, but rather drove rapid intracellular autoproteolysis to produce a mature form of PCSK9. To date, efforts to directly inhibit PCSK9 proteolytic activity through small molecules have been unsuccessful. Other potential druggable steps included the cellular synthesis of PCSK9 and the enigmatic interaction of PCSK9 with the LDL receptor. Eventually, a pioneering class of therapeutic antibodies that neutralized the PCSK9–LDL receptor interaction and reduced plasma LDL-C was successfully developed ( 13 ). That work was supported by mapping and detailed structural resolution of the interface between PCSK9 and the LDL receptor, which explained how those antibodies achieve neutralization (see international patent publication WO 2009/026558; ref. 14 ).
In addition to catalyzing therapeutic molecule discovery, human genetic diversity analyses helped de-risk potential safety concerns with PCSK9 inhibition and provided a degree of conviction in clinical efficacy that was necessary to justify pursuit of large cardiovascular outcomes studies, such as the testing of evolocumab in the FOURIER trial ( 15 ). The human variants confirmed that loss of function of only a single gene was sufficient to confer substantial LDL-C lowering and cardiovascular protection. Subsequent human genetic analyses provided convincing support that the cardioprotective effect of PCSK9 inhibition was through LDL-C lowering ( 16 ) and thereby helped researchers estimate the risk reduction that might occur when PCSK9 inhibitors were added to baseline statin therapy. Importantly, human genetic analyses have not supported the hypothesis that raising the plasma concentration of high-density lipoprotein (HDL) particles would reduce atherosclerotic cardiovascular events ( 17 ), indicating the need to focus on therapies to lower plasma LDL-C and other causal non-HDL cholesterol species. An antibody against murine PCSK9 developed in the course of the evolocumab program elicited a 26% to 36% reduction in total cholesterol in adult mice (though unlike primates, mice carry the majority of their circulating cholesterol in HDL particles) ( 13 ). These results, along with the human genetics, provided the critical impetus to press forward with nonhuman primate studies (where 80% LDL reduction was observed) ( 13 ) and subsequent first-in-human testing of evolocumab. 
Finally, human genetic analyses supported the long-term safety of PCSK9 inhibition (and the low levels of plasma LDL-C that would be achieved), mitigating concerns of adverse neurocognitive effects, incident diabetes, and other potential liabilities ( 18 ), concerns which have been further alleviated with data from large randomized controlled trials and long-term use of approved PCSK9 inhibitors ( 19 ).
Applying lessons learned from PCSK9 inhibitor development
In retrospect, PCSK9 is an atypical case where the genetic findings provided a compelling path to target validation and made a case for translatability. Most genetic associations will have smaller effect sizes or more obscure targets, requiring more effort for determining translatability. However, the insights gleaned from PCSK9 illustrate the power of this approach and inform how population-scale interrogation of human biological diversity can impact drug development for common diseases.
While aggressive LDL-C lowering is a cornerstone of cardiovascular prevention, many patients with low plasma LDL-C still have cardiovascular events, a phenomenon termed “residual cardiovascular risk.” Genetic studies have identified several new drug targets that may confer cardiovascular protection in an LDL-C–independent manner ( 20 ). One of the most interesting targets is the circulating lipoprotein “little a” particle [Lp(a)] ( 21 ), which is encoded by the APOA gene and produced by the liver. Lp(a) can be inhibited with small interfering RNA or antisense oligonucleotide drugs that are directed against APOA in hepatocytes, approaches that are being tested in ongoing phase III outcome studies ( 22 ). As orthologs for human Lp(a) are only found in old-world primates, human genetic studies are essential to elucidate any putative pathogenic mechanism and develop drugs to block potential pathological effects. Indeed, human genetic and proteomic analyses may help define optimal plasma Lp(a) cutoffs and other enrichment factors for trial inclusion, select the most appropriate clinical trial endpoints, quantify any dose-response relationship between Lp(a) lowering and reduction in cardiovascular events, de-risk the potential liabilities of very low plasma Lp(a) concentration, and understand the implications of higher mean plasma Lp(a) concentration associated with African ancestry.
In addition to helping identify the best drug targets, human genetics and proteomics have enormous potential to enhance the design and conduct of clinical trials. As the standard of care for preventing atherosclerotic events continues to improve, demonstrating large effects on absolute risk reduction in a broad population using standard clinical criteria for trial enrollment will be increasingly difficult. While a single common sequence variant is typically associated with only a modest incremental increase in the risk of disease, genetic risk scores based on the aggregate effect of many common variants can be predictors of incident cardiovascular events that may provide discriminatory information beyond traditional clinical risk factors in certain populations ( 23 , 24 ). Therefore, genetic risk scores may be used to enrich clinical trials for participants who are more likely to have cardiovascular events during the trial timeframe, thereby magnifying the demonstrable efficacy of a drug and potentially allowing trials to be smaller, faster, and cheaper ( 25 ). In some cases, genetic risk scores may also be able to enrich a trial with patients who preferentially benefit from a specific intervention ( 26 ).
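The aggregate-effect idea behind a genetic risk score can be sketched as a weighted sum of risk-allele dosages, where each weight is the per-allele effect size (log-odds) from an association study. The variant IDs and weights below are hypothetical, for illustration only:

```python
# Illustrative polygenic risk score (PRS): a weighted sum of risk-allele
# dosages (0, 1, or 2 copies per variant), each weighted by the per-allele
# log-odds effect size. All variant IDs and weights are hypothetical.
def polygenic_risk_score(dosages, weights):
    """dosages: dict variant_id -> allele count (0, 1, or 2);
    weights: dict variant_id -> per-allele effect size (log-odds)."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

weights = {"rs_A": 0.12, "rs_B": 0.08, "rs_C": -0.05}   # hypothetical effects
patient = {"rs_A": 2, "rs_B": 1, "rs_C": 0}             # hypothetical genotype
score = polygenic_risk_score(patient, weights)
print(round(score, 2))  # 0.32
```

In a trial-enrichment setting, participants whose score exceeds a chosen percentile of the population distribution would be preferentially enrolled.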
Beyond genetics
A limitation of genetic risk scores is that the germline genome sequence is static. By contrast, human disease pathogenesis involves dynamic interaction between the environment and the output of the genome (e.g., RNA and proteins). Furthermore, while the germline genome may help assess the lifetime risk of a disease, it may not robustly influence disease progression or predict response to therapy. By comparison, the plasma proteome, which is under the influence of genetic variation, changes in response to environmental factors such as stress, injury, aging, food intake, disease progression, and drug exposure. The plasma proteome may therefore help capture the biology of gene-environment interactions. High-throughput proteomic technologies that allow for simultaneous measurement of thousands of proteins in the plasma using affinity-based methods are a powerful complement to genome sequencing. When applied at population scale, proteomics can provide information regarding how sequence variants impact human traits and diseases, identify plasma proteins that can serve as biomarkers for target engagement or patient selection, and in some cases identify drug targets that would not necessarily emerge from germline genetic analyses alone. Applying artificial intelligence–based analytical methods may enhance the ability to extract insights from these vast amounts of genetic, transcriptomic, proteomic, and clinical data. For example, deCODE Genetics has recently developed an artificial intelligence model based on plasma proteomics to predict remaining lifespan ( 27 ) and has also validated a proteomic risk score for prediction of atherosclerotic cardiovascular events that has superior performance to genetic risk scores ( 28 ). Given the dynamic nature of the proteome, such proteomic risk scores could change with drug treatment, potentially providing an early surrogate for cardiovascular events prior to the completion of a large, protracted, and expensive outcome study.
These proteomics-based patient enrichment strategies may also enhance our ability to demonstrate outsized efficacy in clinical trials and observe meaningful treatment effects at earlier time points.

Acknowledgments
I thank Kári Stefánsson, Narimon Honarpour, Raymond Deshaies, and Simon Jackson for engaging in thoughtful discussion and providing constructive input.
Published electronically January 16, 2024. J Clin Invest. 2024;134(2):e178332. License: CC BY.
PMC10763722 (PMID: 37962957)

Introduction
The giant myofilament titin is the third most abundant sarcomeric protein after actin and myosin ( 1 ). A single titin molecule spans half of the sarcomere from the Z-disk to the M-line ( 2 ). Titin’s main function is to provide passive stiffness to striated muscle ( 3 ), but it also plays a prominent role in sarcomere development and assembly ( 4 ). Recent next-generation sequencing (NGS) data have shown that mutations in the TTN gene, which encodes titin, are associated with skeletal and cardiac myopathies ( 5 – 7 ). Among these, the most prevalent is dilated cardiomyopathy (DCM) ( 6 ). DCM is characterized by ventricular and atrial enlargement and reduced ventricular systolic function ( 8 ). The leading genetic cause of DCM is heterozygous (HET) truncating variant mutations in the TTN gene (TTNtvs), which account for approximately 15%–25% of familial cases ( 6 , 9 , 10 ). Most TTNtvs are nonsense and frameshift mutations, but splicing and copy number mutations occur as well ( 6 , 11 ). TTNtvs are overrepresented in the A-band section of titin ( 6 , 9 ), which consists of constitutively expressed exons. This section is thought to be functionally inextensible and to act as a molecular ruler for thick filament assembly ( 12 ). Interestingly, TTNtv mutations also appear in approximately 1% of the healthy population, mainly in nonconstitutive exons of the I-band, indicating extensive alternative splicing in titin’s I-band section ( 9 ).
The molecular mechanisms underlying TTNtv-induced DCM are still highly controversial. Multiple pathways have been proposed, including haploinsufficiency, the poison peptide mechanism, and perturbation of cardiac metabolism ( 10 , 13 , 14 ). Moreover, it has been suggested that the TTNtv mutation itself is often insufficient to induce a phenotype, but an additional factor, such as a second gene mutation, pregnancy, or an environmental stressor (e.g., alcohol, hypertension, chemotherapy), is required to evoke clinical manifestation of the disease ( 15 – 21 ). Recent studies by Fomin et al. ( 22 ) and McAfee et al. ( 23 ) demonstrated, for the first time, the presence of truncated titin proteins in human DCM cardiac samples as well as a reduction in total full-length titin. Furthermore, Fomin et al. showed that the truncated protein is not incorporated into the sarcomere but is accumulated in intracellular aggregates that act as toxic agents and impair protein quality control, supporting the poison peptide mechanism ( 22 ). By contrast, McAfee et al. suggested possible sarcomeric integration of the truncated protein ( 23 ). Furthermore, experiments on human induced pluripotent stem cell–derived cardiomyocytes (hiPSC-CMs) containing TTNtv showed impaired contractility relative to healthy controls ( 13 , 24 ). Overall, these findings suggest that the truncated protein might be integrated into the sarcomere, supporting the existence of a poison peptide mechanism. However, the sarcomeric presence and arrangement of truncated titin has not yet been detected directly in the myocardium of patients with DCM.
Here, we investigated titin truncating variant mutations in a cohort of 127 patients with DCM. We analyzed the gene sequence of the samples to identify the location of titin truncation and investigated the protein expression profiles to reveal the corresponding protein products. The sarcomeric arrangement of truncated titin was explored with super-resolution microscopy on myocardial samples labeled with sequence-specific antibodies and exposed to mechanical stretch. We found that truncated titin is structurally integrated in the sarcomere and causes small, albeit probably functionally important, structural disturbances that are the possible contributors to the pathway toward DCM.

Methods
Sample collection and handling.
Human myocardial tissue samples were obtained from the Transplantation Biobank of the Heart and Vascular Center at Semmelweis University in Budapest, Hungary. Myocardial septum samples were collected from 127 patients with clinically identified end-stage DCM, who were undergoing orthotopic heart transplantation (HTx). The samples were surgically dissected from the explanted, diseased hearts of the recipients. The septum samples were immediately snap-frozen in liquid nitrogen under sterile conditions and stored at –80°C for further measurements and analyses. Echocardiographic data recorded prior to surgery were obtained from our Transplantation Biobank database. The sample ID numbers shown in the images are the patients’ ID numbers from the Heart Transplantation registry. As a non-DCM negative control, left ventricular papillary muscle samples were collected in 3 separate open-heart surgeries. Given the extremely small size of these samples, they were used only for structural (STED microscopy) measurements.
Gene sequencing.
Genomic DNA was isolated from 25 mg frozen septum samples using the QIAamp DNA Mini Kit (QIAGEN) according to the manufacturer’s recommendation. NGS of the purified DNA (50 ng) was performed using the Illumina TruSight Cardio library preparation kit ( 25 ), with the libraries sequenced on an Illumina MiSeq instrument. Quality control of the raw fastq files was performed with the FastQC (version 0.11.9) and MultiQC (version 1.9) algorithms ( 37 , 38 ). Further bioinformatic analyses were carried out with the Broad Institute’s Genome Analysis Toolkit (GATK) Best Practices of Germline Short Variant Discovery ( 26 ). Sequence alignment to the hg19 genome version was performed using the Burrows-Wheeler Aligner (BWA) (version 0.7.17), then the mapped reads underwent duplicate marking (GATK-MarkDuplicatesSpark tool) and Base Quality Score Recalibration (GATK-BQSR tool) ( 39 ). The variant calling step was performed for every sample (GATK-Haplotype Caller). The GVCF files produced for every analyzed sample were consolidated into a single file (GATK-GenomicsDBImport) in order to perform joint genotyping (GATK-GenotypeGVCF). This cohort-wide view facilitated highly sensitive detection of variants even at difficult genomic sites. Final filtering was performed by hard filtering with the following parameters: MIN_QD = 2, MAX_FS = 60. The filtered variants were annotated with dbSNP (version human_9606_b151), COSMIC (version 92), Ensembl-Variant Effect Predictor, and ClinVar (version 2020.07.17) databases.
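The hard-filter step above can be illustrated as a simple pass/fail rule on standard VCF INFO annotations. This is a simplified re-implementation of the stated thresholds (quality-by-depth at least 2, Fisher strand bias at most 60), not the GATK tool itself:

```python
# Simplified sketch of the hard-filter rule described above: a variant
# passes if its quality-by-depth (QD) is >= 2 and its Fisher strand bias
# (FS) is <= 60. This mirrors the MIN_QD = 2, MAX_FS = 60 thresholds,
# but is an illustration, not GATK's VariantFiltration tool.
def passes_hard_filter(info, min_qd=2.0, max_fs=60.0):
    """info: dict of VCF INFO annotations, e.g. {'QD': 12.3, 'FS': 1.7}."""
    return info.get("QD", 0.0) >= min_qd and info.get("FS", float("inf")) <= max_fs

variants = [
    {"QD": 18.4, "FS": 0.8},   # well-supported -> passes
    {"QD": 1.2,  "FS": 3.1},   # low quality-by-depth -> filtered
    {"QD": 25.0, "FS": 75.2},  # strong strand bias -> filtered
]
print([passes_hard_filter(v) for v in variants])  # [True, False, False]
```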
Protein solubilization.
Pieces of the myocardial septum samples (10–15 mg) were homogenized in glass Kontes Dounce tissue grinders under liquid nitrogen. After 20 minutes of incubation at –20°C, the samples were solubilized at 60°C for 15 minutes in 50% urea buffer (8 M urea, 2 M thiourea, 50 mM Tris-HCl, 75 mM DTT, 3% SDS, and 0.03% bromophenol blue, pH 6.8) and 50% glycerol containing protease inhibitors (0.04 mM E64, 0.16 mM leupeptin, and 0.2 mM PMSF). All solubilized samples were centrifuged at 16,000 × g for 5 minutes, aliquoted, flash-frozen in liquid nitrogen, and stored at –80°C ( 40 ).
Titin isoform analysis.
Titin expression levels were determined with 1% SDS–agarose gel electrophoresis ( 41 ), performed at 16 mA/gel for 3.5 hours. Subsequently, the gels were stained overnight with SYPRO Ruby Protein Gel Stain (Thermo Fisher Scientific) and then digitized with a Typhoon laser scanner (Amersham BioSciences). ImageJ (NIH) was used to analyze the OD of the titin bands. The relative titin isoform ratio (N2BA/N2B) was calculated from the integrated band densities. The relative content of full-length titin (T1), which includes N2BA and N2B, was normalized to MyHC. T2 (titin’s proteolytic degradation product) was normalized to T1. Truncated proteins detected on the gels were normalized to T1.
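The densitometric ratios described above reduce to simple arithmetic on integrated band intensities. A minimal sketch, with made-up intensity values in arbitrary scanner units:

```python
# Sketch of the densitometric ratios computed from integrated band
# intensities (arbitrary units). All input values here are invented
# for illustration; only the ratio definitions follow the text.
def titin_ratios(n2ba, n2b, t2, trunc, myhc):
    t1 = n2ba + n2b                       # full-length titin (N2BA + N2B)
    return {
        "N2BA/N2B": n2ba / n2b,           # isoform ratio
        "T1/MyHC": t1 / myhc,             # full-length titin vs. myosin
        "T2/T1": t2 / t1,                 # degradation product vs. T1
        "trunc/T1": trunc / t1,           # truncated protein vs. T1
        "(T1+T2+trunc)/MyHC": (t1 + t2 + trunc) / myhc,
    }

r = titin_ratios(n2ba=60.0, n2b=100.0, t2=24.0, trunc=30.4, myhc=400.0)
print(round(r["N2BA/N2B"], 2), round(r["trunc/T1"], 2))  # 0.6 0.19
```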
Detection of truncated titin products.
To determine whether the suspected band visible on the gel was indeed a truncated titin product, we performed Western blot analysis. The samples were separated on a 0.8% SDS-agarose gel and then transferred onto a PVDF membrane (Hybond-LFP, Amersham BioSciences) using a semi-dry blotter (Trans-Blot Cell, Bio-Rad). To differentiate the truncated product from T2, we used 2 antibodies, each detecting the terminal regions of titin. The blots were probed with anti-T12 (binds near titin’s N-terminus, provided by Dieter O. Fürst, University of Bonn, Bonn, Germany; dilution 1:1,000) ( 29 ) and anti-M8M10 (TTN-9, Myomedix, dilution 1:1,000; see Supplemental Information) primary antibodies overnight at 4°C, followed by secondary CyDye-conjugated antibodies (Amersham BioSciences). Subsequently, the blots were digitized with a Typhoon laser scanner. Relative expression levels of the proteins were analyzed using ImageJ.
Preparation of myofibril suspension.
The myofibril suspension was prepared as previously described ( 42 ). Briefly, 2 mL ice-cold permeabilization solution (10 mM Tris [pH 7.1], 132 mM NaCl, 5 mM KCl, 1 mM MgCl 2 , 5 mM EGTA, 5 mM DTT, 10 mM NaN 3 , 20 mM 2,3-butanedione-monoxime [BDM], and 1% Triton X-100) containing protease inhibitors (0.04 mM E64, 0.16 mM leupeptin, and 0.2 mM PMSF) was added to previously frozen small pieces of septum samples (total weight, 15 mg). Next, the samples were incubated on a 360° rotating shaker for 3 hours at 4°C. The permeabilized samples were subsequently rinsed in washing solution (same as permeabilization solution, but without Triton X-100 and BDM) for 15 minutes. To prepare the myofibril suspension, the samples were transferred into 1 mL ice-cold fresh washing solution and homogenized with an MT-30K Handheld Homogenizer (Hangzhou Miu Instruments) for 10–15 seconds at 27,000 rpm. The myofibrils were pelleted by applying low-speed centrifugation followed by resuspension and washing. The centrifugation and washing steps were performed at least twice. Subsequently, the pellet was solubilized in 50% urea buffer and 50% glycerol with inhibitors (1:1) and was incubated for 15 minutes at 60°C. Additional 1:3 and 1:10 dilutions of the myofibril suspension/urea buffer solution were prepared. The solubilized myofibril samples were separated on 1% SDS–agarose gels.
Super-resolution microscopy.
Pieces of flash-frozen DCM TTNtv– ( n = 3) and DCM TTNtv+ ( n = 3) left ventricular cardiac muscle were dissected in relaxing solution (40 mM BES, 10 mM EGTA, 6.56 mM MgCl 2 , 5.88 mM Na-ATP, 1 mM DTT, 46.35 mM K-propionate, and 15 mM creatine phosphate, pH 7.0) containing 1% (w/v) Triton X-100 and protease inhibitors (0.1 mM E64, 0.47 mM leupeptin, and 0.25 mM PMSF). The dissected cardiac muscle pieces were skinned overnight at 4°C in relaxing solution containing 1% Triton X-100 and protease inhibitors, washed thoroughly for at least 5 hours with relaxing solution, and then used immediately for experiments. Myofibril bundles were prepared and stretched from the slack length to different degrees (~40%–70%) and then fixed with 4% (v/v) formaldehyde diluted with phosphate buffer at neutral pH. Fixed bundles were embedded in OCT compound and frozen immediately in 2-methylbutane precooled by liquid nitrogen. Cryosections (8 μm thick) were then cut with a Microm Cryo Star HM 560 cryostat (Thermo Fisher Scientific) and mounted onto microscope slides (Superfrost UltraPlus, Thermo Fisher Scientific). Tissue sections were permeabilized in 0.2% Triton X-100 in PBS for 20 minutes at room temperature, blocked with 2% BSA and 1% normal donkey serum in PBS for 1 hour at 4°C, and incubated overnight at 4°C with primary antibodies and phalloidin diluted in blocking solution. The primary antibodies included rabbit polyclonal anti–titin MIR (TTN-7; www.myomedix.com, 0.4 μg/mL, 1:300 dilution), rabbit polyclonal anti–titin A170 (TTN-8; www.myomedix.com, 0.7 μg/mL, 1:250 dilution), and Alexa Fluor 488–conjugated phalloidin (Invitrogen, Thermo Fisher Scientific, A-12379, 66 μmol/L, 1:500 dilution). Sections were then washed twice for 30 minutes with PBS and incubated with Abberior STAR580 goat anti–rabbit IgG (Abberior) (1:250 dilution) and Alexa Fluor 488–conjugated phalloidin (Invitrogen, Thermo Fisher Scientific, A-12379, 66 μmol/L, 1:500 dilution). 
The sections were then washed twice for 15 minutes with PBS and covered with number 1.5H high precision coverslips (Marienfeld Superior) using ProLong Diamond (Thermo Fisher Scientific) for 24 hours to harden. STED microscopy was performed using an Abberior Expert Line microscope (Abberior). For the excitation of conjugated phalloidin and titin epitopes, 488 nm and 560 nm laser illumination sources were used, respectively. For STED imaging of the different titin epitopes labeled with STAR580 dye, a 775 nm depletion laser was utilized. Images were acquired with a Nikon CFI PL APO 100× (NA = 1.45) oil immersion objective coupled with avalanche photodiode detectors with spectral detection capabilities. Deconvolution of the recorded STED images was performed with Huygens Professional software (SVI) using the theoretical point spread function (PSF) of the imaging objective. Fluorescence intensity plot profiles were generated with Fiji (based on ImageJ, version 1.52). Plot profiles were fitted with Gaussian curves to determine the epitope peak position, height, and full width at half maximum (FWHM) using Fityk 1.3.0 software. A-band width was determined from the MIR epitope positions across the Z-disk. Fluorescence intensity normalization of signals of the A170 titin epitope was performed on carefully preselected plot profiles. For this purpose, background intensity–corrected plot profiles were collected, with averaging across a thick line to compensate for labeling inhomogeneity along the epitope lines. Plot profiles were discarded in case the intensity fluctuation of either the MIR or the A170 epitopes across the M-line exceeded 20% of the average intensity of the respective epitope. Fluorescence intensity was calculated from the peak height of the fitted Gaussians.
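The plot-profile analysis above extracts epitope peak position, height, and FWHM by Gaussian fitting. A minimal sketch with a synthetic profile (the workflow in the paper uses Fityk; this is a generic curve-fit illustration, not that software):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a fluorescence intensity plot profile with a Gaussian to recover
# the epitope peak position, height, and FWHM. The profile is synthetic:
# a known Gaussian plus noise, so the fit can be checked against truth.
def gaussian(x, height, center, sigma):
    return height * np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

x = np.linspace(0, 400, 401)                     # position along line (nm)
true = dict(height=100.0, center=180.0, sigma=30.0)
profile = gaussian(x, **true) + np.random.default_rng(0).normal(0, 1, x.size)

(height, center, sigma), _ = curve_fit(gaussian, x, profile, p0=(80, 150, 20))
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)   # FWHM of a Gaussian
print(round(center), round(fwhm))                # ~180 nm, ~71 nm
```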
Statistics.
Statistical analysis was performed with GraphPad Prism 8 (GraphPad Software). Continuous-variable statistical data are shown as the mean ± SEM unless stated otherwise. Differences between groups were considered to be statistically significant at a probability value of P < 0.05. A 2-tailed Student’s t test was used when comparing the means of 2 groups, and Welch’s correction was applied in the case of unequal variances between the 2 groups. Multiple comparisons were performed with ANOVA. Normality of statistical distribution was checked with the Shapiro-Wilk test. A Mann-Whitney U , Kruskal-Wallis, or ANOVA test was used for statistical comparisons of data for which normal distribution could not be assumed. In order to increase the statistical power of the tests, an equal or close-to-equal sample size was applied within independent groups. Linear regression analysis was performed to fit and compare sarcomere length–dependent epitope distance and fluorescence intensity data obtained from super-resolution microscopy. Sample randomization as well as blinding of the investigator were applied for analysis of the super-resolution microscopy images. For detailed statistical information, see the Supplemental Tables 1–16 .
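The test-selection logic described above (parametric test when normality holds, nonparametric otherwise) can be sketched as follows; the helper name and synthetic data are illustrative, and the analysis in the paper was run in GraphPad Prism rather than Python:

```python
import numpy as np
from scipy import stats

# Sketch of the two-group decision rule described above: Student's t test
# (with Welch's correction for unequal variances) when both groups pass
# the Shapiro-Wilk normality check, otherwise Mann-Whitney U.
def compare_groups(a, b, alpha=0.05):
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        equal_var = stats.levene(a, b).pvalue > alpha  # variance check
        return "t-test", stats.ttest_ind(a, b, equal_var=equal_var).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

rng = np.random.default_rng(1)
g1 = rng.normal(10.0, 1.0, 30)   # synthetic group 1
g2 = rng.normal(11.5, 1.0, 30)   # synthetic group 2, shifted mean
name, p = compare_groups(g1, g2)
print(name, p < 0.05)
```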
Study approval.
The sample collection procedure and experiments were reviewed and approved by institutional and national ethics committees (Semmelweis University Regional and Institutional Committee of Science and Research Ethics, Budapest 1091, Hungary; permission nos. ETT TUKEB 7891/2012/EKU [119/ PI/12.], TUKEB 73/2005, and IV/10161-1/2020/EKU). Written informed consent was obtained from the patients prior to sample collection, in accordance with the Declaration of Helsinki.
Data availability.
All data presented in this work are available in the associated Supplemental Supporting Data Values file. The NGS data have been deposited in the European Genome-Phenome Archive (EGA) ( https://ega-archive.org ) under the title “Next-generation sequencing on cardiac samples in Hungarian patients of dilated cardiomyopathy” (accession no. EGA50000000043).

Results
Patient data and gene sequencing: identification of a TTNtv DCM cohort.
We screened 127 myocardial samples from explanted hearts of patients who were clinically identified as having DCM ( Table 1 , Supplemental Table 1 , and Supplemental Table 2 ; supplemental material available online with this article; https://doi.org/10.1172/JCI169753DS1 ) for potentially pathogenic genetic variants using targeted exome sequencing (NGS). The DNA libraries were prepared so as to identify variants in 174 genes associated with inherited cardiac conditions ( 25 ). Fifty of the 174 genes are associated with DCM, whereas the remaining genes are implicated in inherited arrhythmias, other cardiomyopathies, aortopathies, and familial hypercholesterolemia. We identified 35,635 variants using the Genome Analysis Toolkit (GATK) pipeline ( 26 ), of which 13,815 were found in DCM-associated genes and 4,428 in the titin (TTN) gene ( Supplemental Table 3 ). The variants were annotated by comparison with the Single Nucleotide Polymorphism Database (dbSNP), the Catalogue of Somatic Mutations in Cancer (COSMIC) database, and the ClinVar database. Based on the annotations and newly identified frameshift and nonsense variants, we found potentially pathogenic heterozygous mutations in 44 samples ( Supplemental Table 3 ). In 35 samples (27.5%), the mutations were in DCM genes. We identified 19 TTNtv (15%), 4 lamin A (LMNA), 4 desmoplakin (DSP), 2 BAG cochaperone 3 (BAG3), and 1 each of the fukutin (FKTN), laminin subunit alpha 2 (LAMA2), myosin-binding protein C3 (MYBPC3), alpha heavy chain subunit of cardiac myosin (MYH6), beta heavy chain subunit of cardiac myosin (MYH7), phospholamban (PLN), RNA binding motif protein 20 (RBM20), and troponin I3 (TNNI3) variants ( Supplemental Tables 3 and 4 ). Of the 19 pathogenic, likely pathogenic, and new heterozygous TTNtv mutations, we found that 8 were frameshift and 11 were nonsense mutations ( Supplemental Table 4 ).
All variants were located in the constitutively expressed I/A junction and the A- and M-band regions of titin ( Figure 1 and Supplemental Table 5 ). In 2 TTNtv samples, we identified potentially pathogenic mutations in the Raf-1 Proto-Oncogene (RAF1) and transient receptor potential cation channel subfamily M member 4 (TRPM4) genes ( Supplemental Table 4 ). On the basis of the NGS data, we divided our samples into DCM samples with (DCM TTNtv+ , n = 19) or without (DCM TTNtv– , n = 108) titin truncation. We evaluated the echocardiographic data on the patients recorded prior to the transplantation to examine any TTNtv-associated phenotypes ( Table 1 and Supplemental Table 2 ). Although more men had TTNtv mutations, similar to the finding of others ( 6 ), we did not find any genotype-associated phenotype severity in our DCM population. None of the functional parameters showed differences between the 2 groups.
Protein analysis detects TTNtv subspecies and perturbed stoichiometries.
Next, we analyzed the titin expression profiles of the myocardial samples with high-resolution gel electrophoresis ( Figure 2 ). The N2BA/N2B titin isoform ratio was elevated in both the DCM TTNtv– and DCM TTNtv+ groups compared with the physiological ratios obtained from data in the literature ( 27 ) ( Figure 2B ). The full-length titin to myosin heavy-chain ratio (T1/MyHC) was significantly decreased in the DCM TTNtv+ group ( Figure 2C ). The ratio of the T2 fragment (calpain-dependent proteolytic fragment that encompasses titin’s A-band section and a 100–200 kDa portion of its distal I-band section ( 28 )) to full-length titin (T2/T1) was significantly increased in the DCM TTNtv+ samples ( Figure 2D ), suggesting that proteolytic activity may be increased in the DCM TTNtv+ myocardial tissue. Nevertheless, the (T1 + T2)/MyHC ratio was also significantly reduced in the DCM TTNtv+ samples ( Figure 2F ). We detected additional protein bands in the DCM TTNtv+ group ( Figure 2A and Supplemental Figure 1 ), albeit not in all TTNtv + samples. These proteins were observed on the gels at the most probable molecular weights calculated for the respective truncated titins from gene-sequencing data ( Supplemental Table 5 ). Accordingly, we identified these bands as the protein products of the truncated titin genes. The average relative expression of the truncated proteins to full-length titin (T1) was 0.19. Notably, upon adding the truncated protein quantity to the respective T1 ( Figure 2E ), we observed no significant difference between the DCM TTNtv+ and DCM TTNtv– samples. Furthermore, the integrated titin quantities (T1 + T2 + truncated titin) normalized to MyHC were essentially identical in DCM TTNtv+ and DCM TTNtv– ( Figure 2G and Supplemental Table 6 ).
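The expected gel position of each truncated product can be approximated from its truncation site. A back-of-the-envelope sketch, assuming a mean amino acid residue mass of ~110 Da (the calculation in the paper presumably uses the exact residue-level sequence; the truncation positions below are hypothetical):

```python
# Rough estimate of a truncated titin's molecular weight from the
# truncation position, using an assumed mean residue mass of ~110 Da.
# Truncation sites below are hypothetical, for illustration only.
MEAN_RESIDUE_MASS_DA = 110.0

def truncated_mw_mda(truncation_residue):
    """Approximate mass (in megadaltons) of titin truncated at the given residue."""
    return truncation_residue * MEAN_RESIDUE_MASS_DA / 1e6

for residue in (15_000, 20_000, 25_000):   # hypothetical truncation sites
    print(residue, round(truncated_mw_mda(residue), 2), "MDa")
```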
To experimentally test whether the additional protein bands were indeed truncated titins rather than the product of proteolysis (such as T2), we carried out Western blot analysis using sequence-specific antibodies targeting the C- and N-terminal regions of titin. The T12 antibody, which binds toward titin’s N-terminus ( 29 , 30 ), labeled the additional protein bands but not T2 ( Figure 3A and Supplemental Figure 1 ). By contrast, the M8M10 antibody, which binds near titin’s C-terminus ( 2 , 31 ), labeled T2, but not the additional protein bands ( Figure 3, B and C , and Supplemental Figure 1 ). Thus, we could differentiate the additional protein bands from T2, demonstrating that they contained titin’s N-terminal region and proving that they indeed corresponded to the protein products of the truncated titin genes.
To investigate whether the truncated titins were incorporated into the sarcomere rather than present in the bulk of the sarcoplasm, we performed protein analysis of washed DCM TTNtv+ myofibrils ( Figure 4 and Supplemental Figure 2 , A–F). We were able to detect protein bands corresponding to the respective truncated titins in the gel electrophoretograms of washed DCM TTNtv+ myofibrils. Furthermore, the supernatants of the washed myofibril samples were devoid of truncated titin ( Supplemental Figure 2B ). Thus, we conclude that the truncated titin protein was incorporated into the cardiac muscle sarcomere.
Super-resolution microscopy detects altered A/I junction and M-line widths.
To explore whether and how truncated titin is structurally integrated into the sarcomere, we analyzed cardiac muscle samples labeled with sequence-specific anti-titin antibodies (MIR and A170, see Figure 1 ), using super-resolution stimulated emission depletion (STED) microscopy ( Figure 5 ). We were unable to discern gross structural changes in the DCM TTNtv+ sarcomeres ( Figure 5A ) with respect to the DCM TTNtv– ( Figure 5B ) and negative control samples ( Figure 5C ). The A170 doublet (separated by ~140 nm) could be resolved in all groups with STED ( Figure 6, A–C ), but not with confocal microscopy ( Figure 6C , bottom). Neither epitope doubling ( Figure 6, A and C ) nor significant intensity differences (data not shown) were found for the MIR epitope, which is present in both the full-length and truncated titin.
To investigate the structural integrity of the sarcomere-incorporated truncated titin, we carried out measurements on myocardial samples exposed to mechanical stretch. Epitope-to-epitope distance measurements revealed that the A-band titin length, measured as the distance between 2 consecutive MIR epitopes separated by an A170 epitope doublet ( Figure 6A ), increased in both DCM groups ( Figure 6, D and E ) and in the negative control ( Figure 6F ) as the fibers were passively stretched. Regression analysis of the A-band titin length in the 1.8–2.6 μm sarcomere length range revealed that, while the slopes were similar in the negative control and the DCM TTNtv– sample ( Figure 6F ), the slope was significantly ( P < 0.0001) reduced in the DCM TTNtv+ samples ( Figure 6E ). The MIR epitope was shifted toward the Z-disk at slack sarcomere length (1.8 μm) in the DCM TTNtv+ samples with respect to the DCM TTNtv– samples ( Figure 6E ), although not as much as in the negative control ( Figure 6F ). The MIR epitope position was less responsive to longitudinal stretch, as indicated by the significantly ( P < 0.0001) lower A-band titin length values measured at longer sarcomere lengths.
Measuring the distance between consecutive A170 epitopes allowed us to investigate the structural response of the titin kinase (TK) region to mechanical stretch ( Figure 6, G–I ). The TK region localized in the bare zone of the A-band approximately 70 nm from the M-line in both DCM TTNtv– and DCM TTNtv+ sarcomeres, calculated as half the distance between 2 consecutive A170 epitopes. In DCM TTNtv– , the M-line to TK distance increased by approximately 20 nm upon an increase in the sarcomere length from 1.8 to 2.6 μm ( Figure 6H ). By contrast, in the negative control, the M-line to TK distance remained essentially constant across this sarcomere length range ( Figure 6I ). Furthermore, to our surprise, the TK moved closer to the M-line in DCM TTNtv+ sarcomeres when the fibers were passively stretched, as indicated by the negative slope of the M-line to TK versus the sarcomere length function ( Figure 6H ).
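The slope comparisons in these stretch experiments amount to fitting a line to epitope distance versus sarcomere length for each group. A minimal sketch with synthetic, perfectly linear data (the numbers are invented; only the fitting procedure mirrors the analysis):

```python
import numpy as np

# Linear regression of epitope-to-epitope distance against sarcomere
# length, as used to compare stretch responses between groups. Data
# points and slopes below are synthetic, not the paper's measurements.
def fit_slope(sarcomere_len_um, distance_nm):
    slope, intercept = np.polyfit(sarcomere_len_um, distance_nm, deg=1)
    return slope, intercept

sl = np.array([1.8, 2.0, 2.2, 2.4, 2.6])     # sarcomere lengths (um)
ttnv_neg = 1500 + 125 * (sl - 1.8)           # hypothetical: steeper response
ttnv_pos = 1480 + 60 * (sl - 1.8)            # hypothetical: blunted response

slope_neg, _ = fit_slope(sl, ttnv_neg)
slope_pos, _ = fit_slope(sl, ttnv_pos)
print(round(slope_neg), round(slope_pos))    # 125 60
```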
To gain further insight into the possible arrangement of truncated titin in the sarcomere, we carried out measurements of epitope widths and intensities ( Figure 7 ). The mean MIR epitope width was largest in the DCM TTNtv+ sarcomeres, exceeding that of both the DCM TTNtv– and the negative control samples, and significantly greater than in the DCM TTNtv– samples ( Figure 7A and Supplemental Table 11 ). Furthermore, the spread of the MIR width data points increased upon increasing the sarcomere length from 1.7 to 1.85 μm, then declined upon further longitudinal sarcomere stretch ( Supplemental Figure 4A ). The relative intensity of the A170 epitope, normalized to the intensity of the MIR epitope of the same sarcomere, was significantly decreased in DCM TTNtv+ muscles (0.1765) compared with DCM TTNtv– (0.2646), which is consistent with the missing epitope in truncated titin (compare Figure 6, A and C ). The ratio of these intensities is 0.667, which is in good agreement with the proteomic ratio of the expressed truncated and full-length titins in the respective DCM TTNtv+ sample ( Figure 7B ). Notably, the normalized A170 intensity was also reduced in the negative control samples with respect to the DCM TTNtv– samples ( Figure 6B and Figure 7B ), which we attribute to variations in labeling efficiency. Therefore, we interpret the A170/MIR intensity ratios with caution. The A170 epitope width was significantly ( P < 0.0001) greater in the DCM TTNtv+ and control sarcomeres than in the DCM TTNtv– samples ( Supplemental Figure 5 ).

Discussion
Heterozygous TTNtvs are the most common genetic cause of familial DCM, accounting for 15%–25% of the cases ( 6 , 9 , 10 ). The pathomechanism by which titin mutations induce the cardiac phenotype are under extensive research ( 32 ). Although haploinsufficiency and a dominant negative effect have recently been suggested, on the basis of proteomics analyses ( 22 , 23 ), the mechanistic links from the truncated titin protein to the sarcomeric structure and function remain highly controversial and debated ( 33 ). In order to dissect the role of titin in the pathogenesis of DCM, we performed NGS, high-resolution protein analysis, and super-resolved immunofluorescence microscopy combined with sarcomere extension on cardiac explant samples from a cohort of 127 patients with clinically diagnosed DCM.
We identified TTNtvs in 15% of our patient cohort, which is in accordance with prior NGS data ( 6 , 9 , 22 ). We uncovered additional, non-titin-related DCM and non-DCM-causing mutations in the samples (see Supplemental Tables 2 and 3 ). Clinical data revealed sex differences, as more men carried the truncating mutations than did women. However, the echocardiographic measurements revealed no differences between TTNtv + versus TTNtv – DCM patients ( Table 1 ). It is important to note that the echocardiographic data were collected just prior to heart transplantation, by which time all of the patients had developed end-stage heart failure. Furthermore, there was a variation in the sample size of the echocardiographic data due to the heterogeneity in clinical profiling. Altogether, there were no substantial functional differences between the TTNtv + and TTNtv – DCM patients, which is in line with the recent study of McAfee et al. ( 23 ) (see also Supplemental Tables 1 and 2 ).
The evaluation of titin expression revealed increased titin N2BA/N2B ratios in all of the DCM samples compared with healthy donor heart data from the literature ( 27 ). We note here that we did not have any nonimplanted donor hearts for comparison and that the papillary muscle samples used as a negative control were so small that we could only use them for STED microscopy but not for electrophoresis. However, we observed no differences between the N2BA/N2B ratios in the 2 DCM groups, suggesting that the more compliant N2BA titin compensated for functional impairment in DCM irrespective of the etiology of the disease ( 27 ). Similar to the findings of Fomin et al. ( 22 ) and McAfee et al. ( 23 ), we found that T1/MyHC was significantly ( P < 0.05) decreased in the DCM TTNtv+ group, supporting the hypothesis that haploinsufficiency indeed contributed to the pathomechanism of TTNtv-induced DCM. In addition, we found significantly ( P < 0.05) increased amounts of T2 in the TTNtv + samples, which points to increased titin turnover related to the ubiquitin proteasome system, the pathogenic role of which has been suggested by Fomin et al. ( 22 ) and needs to be clarified with further experiments. Importantly, however, we found that the integral titin amount, which included the full-length, truncated, and proteolysed proteins, was comparable in the DCM TTNtv– and DCM TTNtv+ samples ( Figure 2 ). We were able to identify truncated proteins in the majority (11 of the 19) of the TTNtv + samples by gel electrophoresis. The truncated proteins appeared on the gels at the molecular weights expected based on the NGS data ( Figure 2A and Supplemental Figure 1 , and Supplemental Tables 5 and 6 ). The difference in titin expression between DCM TTNtv– and DCM TTNtv+ samples, calculated as the T1/MyHC ratio ( Figure 2C ), was alleviated if the truncated proteins were taken into account in calculating total titin in the DCM TTNtv+ samples ( Figure 2, E and G ).
Because the expressed amount of full-length and truncated titin proteins together was not significantly reduced in DCM TTNtv+ , the specific truncated sections of the titin molecule may have harbored important functionality. Thus, the structural and mechanical consequences of the truncated titin protein must be investigated in detail.
Using Western blot analysis, we were able to establish that the additional protein bands, identified putatively as the protein products of the truncated titin gene, were indeed truncated titins rather than further degradation products of T2 ( Figure 3 and Supplemental Figure 1 , and Supplemental Table 16 ). Whereas the M8M10 antibody, which targets titin near its C-terminus, labeled T1 and T2 exclusively ( Figure 3B and Supplemental Figure 1 , lower panel), the T12 antibody, which targets titin near its N-terminus, labeled T1 and all of the additional protein fragments as well ( Figure 3A and Supplemental Figure 1 , upper panel). Thus, the additional protein bands indeed corresponded to the protein products of the truncated titin genes. We note here that we could not detect truncated titin in every DCM TTNtv+ sample (it was undetectable in 8 of the 19) and that some low-quantity truncated titin protein could be detected only by Western blotting ( Supplemental Figure 1 and Supplemental Table 16 ). Conceivably, truncated protein was not produced in all cases ( 10 ), or was produced in quantities so low that it remained below the detection threshold of our technique. Moreover, the quantity of truncated protein varied among samples in spite of the similar penetrance of TTNtv. Understanding how low expression of TTNtv leads to disease manifestation requires extensive further research, particularly because we do not know the exact mechanisms that lead to pathology even in cases in which the truncated protein is clearly identified. We speculate that the titin interactome ( 34 ) is sensitive to the partial loss of titin, even in amounts too small to be detected with current proteomics methods.
To explore whether the truncated titin was incorporated into the sarcomere, we first analyzed the protein composition of washed myofibrils that were devoid of the sarcoplasm. Electrophoretic analysis of skinned and washed DCM TTNtv+ myofibril samples revealed the presence of the respective truncated titin in the myofibrillar fraction ( Figure 4 and Supplemental Figure 2A ) but not in the concentrated supernatant ( Supplemental Figure 2B ). The results support the findings of Fomin et al. and McAfee et al. and suggest a poison peptide mechanism ( 22 , 23 ). Fomin et al. hypothesized that the truncated proteins are accumulated as intracellular aggregates ( 22 ). The study by McAfee et al. revealed TTNtv variants in sarcomere-containing cellular fractions, suggesting that the truncated titin is incorporated into the sarcomere ( 23 ). However, they could not rule out the possibility that the truncated titins are solely present as nonsarcomeric aggregates ( 23 ); therefore, whether the truncated titin molecule is structurally and mechanically integrated into the sarcomere remained a puzzling question.
To uncover the arrangement of truncated titin in the slack and extended sarcomere, we performed STED super-resolution microscopy on negative control, DCM TTNtv– , and DCM TTNtv+ myocardial tissue samples exposed to mechanical stretch and labeled with sequence-specific anti-titin antibodies ( Figures 5 – 7 , Supplemental Figures 3–6 , and Supplemental Tables 11, 13–15 ). It is important to note that, because the truncated titin molecules do not carry epitopes that are unique with respect to the full-length molecule, the immunofluorescence microscopic results provided only indirect evidence of the truncated titin’s sarcomeric behavior. Since both the full-length and truncated titins are likely present in the sarcomere because of the heterozygous nature of TTNtv, truncated titin behavior may be inferred from the number, location, intensity, and spatial width of the antibody epitope signals within the sarcomere. We used 2 anti-titin antibodies to monitor different regions of titin: MIR labels the I/A junction of titin, and A170 is localized at the TK region, at the edge of the bare zone of the A-band. Considering that the TTNtvs studied here were overrepresented in the A-band region, MIR labeled all, whereas A170 labeled none, of the truncated titins in the studied DCM TTNtv+ samples ( Figure 1 ). Such differential labeling allowed us to gain precise insight into the sarcomeric behavior of the truncated titin molecules.
We observed no gross structural disturbance in the DCM TTNtv+ sarcomeres ( Figure 5A ) in comparison with DCM TTNtv– ( Figure 5B ) and negative control ( Figure 5C ) sarcomeres, and we could not detect fluorescence signal in unexpected locations, such as on the surface of the myofibrils or in between the expected epitope locations. In fact, the continuous across-the-sarcomere appearance of both the MIR and A170 epitopes indicated that the myofilaments were in precise registry. Notably, sarcomeric structure was homogenous across the microscopic fields of view, indicating that the truncated titin molecules were distributed homogenously throughout the sample, rather than being confined to distinct sarcomeres that would appear as structural mosaicism. We successfully resolved the A170 epitope doublet, with an average separation distance of approximately 140 nm, owing to the high resolution of STED microscopy, which we measured to be approximately 40 nm for our instrument. Resolving the A170 epitope doublet was important so as to avoid confounding the intensity measurements and to uncover the behavior of the TK region. The average intensity of the A170 epitope was reduced in the TTNtv + samples by 23% and 33% with respect to the normal control and TTNtv – samples, respectively ( Figure 7B and Supplemental Table 12 ), whereas that of the MIR epitope remained unchanged, indicating that the truncated titin was indeed incorporated into the sarcomere and supporting our protein analysis results for washed myofibrils. Why the A170 epitope intensity was smaller in the normal control than in the TTNtv – samples needs further investigation, but it may be associated with tissue specificities (papillary muscle) that affect antibody labeling efficiency. Finally, the lack of MIR epitope doubling suggests that the truncated titin was not only incorporated into the sarcomere but structurally integrated similarly to the full-length form.
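As a quick sanity check, the A170/MIR intensity values reported in the Results (0.1765 for DCM TTNtv+ versus 0.2646 for DCM TTNtv–) can be tied to the ~33% intensity reduction quoted here. The snippet below is purely illustrative; the variable names are ours, not from the study.

```python
# Illustrative arithmetic only; values are taken from the Results section.
a170_mir_ttntv_pos = 0.1765  # normalized A170/MIR intensity, DCM TTNtv+
a170_mir_ttntv_neg = 0.2646  # normalized A170/MIR intensity, DCM TTNtv-

ratio = a170_mir_ttntv_pos / a170_mir_ttntv_neg  # ~0.667, matching the reported ratio
reduction_pct = (1 - ratio) * 100                # percentage reduction vs. TTNtv-

print(round(ratio, 3), round(reduction_pct))  # 0.667 33
```

The ~33% reduction recovered here matches the value quoted for the TTNtv– comparison, and a ratio of roughly two-thirds is what one would expect if truncated titin, which lacks the A170 epitope, makes up a corresponding share of the sarcomeric titin pool.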
The precise epitope localization made possible by STED microscopy allowed us to study the structural rearrangements of titin in the negative control, DCM TTNtv– , and DCM TTNtv+ sarcomeres exposed to a partial functional assay in the form of mechanical stretch ( Figures 6 and 7 , Supplemental Figures 3–6 , and Supplemental Tables 11, 13–15 ). The A-band titin length, measured as the MIR-to-MIR distance ( Figure 6A ), increased in all groups upon passive stretch ( Figure 6, D–F ), which supports earlier notions that the A-band section of titin is genuinely extensible ( 35 ). Interestingly, however, regression analysis revealed a significantly reduced slope in the TTNtv + samples across the 1.8–2.6 μm sarcomere range, which points to a reduced A-band extensibility in the DCM TTNtv+ sarcomere (for statistical comparison, see Supplemental Table 7 ). Notably, the MIR epitope was shifted toward the Z-disk in the DCM TTNtv+ sarcomere at slack (1.8 μm) ( Figure 6E ), suggesting that A-band titin was more extended or, vice versa, that I-band titin was more contracted on average ( Supplemental Figure 3D ) than in DCM TTNtv– sarcomeres. Furthermore, the A-band titin width at slack sarcomere length was smaller in both DCM TTNtv– and DCM TTNtv+ sarcomeres than in the negative control sarcomeres ( Figure 6F ), suggesting that titin’s sarcomeric arrangement was affected in DCM, irrespective of the presence of truncation. The differences in sarcomere length–dependent MIR-to-MIR epitope distance behavior were coupled with a significantly increased MIR epitope width in the DCM TTNtv+ sarcomeres with respect to both the normal control and DCM TTNtv– samples ( Figure 7A and Supplemental Table 11 ), indicating that there was a slight disarrangement among the titin molecules in spite of the gross alignment (i.e., there was no MIR epitope doubling).
Presumably, the I-band section of the truncated titin molecules had become more contracted, owing to the a priori weaker A-band attachment, which resulted in the widening of the MIR epitope. Notably, the average MIR epitope width in the DCM TTNtv+ sarcomeres decreased with increasing sarcomere length ( Supplemental Figure 4 ), suggesting that the axial titin disarrangement may have been reduced by mechanical stretch.
The response of the TK region to sarcomere stretch was quite different in the DCM TTNtv– versus DCM TTNtv+ and negative control sarcomeres ( Figure 6, H and I ). While the M-line–to–TK distance remained constant in the negative control and increased with sarcomere length in DCM TTNtv– sarcomeres, it progressively decreased in DCM TTNtv+ sarcomeres. The increase in the M-line–to–TK distance in DCM TTNtv– sarcomeres provides direct evidence that the TK indeed responded, probably by in situ partial unfolding ( 36 ), to mechanical stretch. Notably, the fractional extensibility of the TK region in DCM TTNtv– sarcomeres was more than twice as large as that of the entire A-band section of titin: ~20 nm extension/~60 nm initial length ( Figure 6H ) versus ~200 nm extension/~1,400 nm initial length ( Figure 6E ), which points to a differential control of titin elasticity or conformation along the thick filament. The lack of detectable TK extension in the normal control might be attributable to tissue specificity (papillary muscle) that needs to be explored further. The reduction in M-line–to–TK distance in DCM TTNtv+ is puzzling, considering that an apparent contraction of the TK region upon sarcomere stretch was unexpected. This paradoxical TK region behavior was coupled with a significantly ( P < 0.0001) increased A170 epitope width in comparison with DCM TTNtv– sarcomeres ( Supplemental Figure 5 ), indicating structural disarrangement among the fewer full-length titin molecules in the bare zone of the DCM TTNtv+ sarcomere. We note that the A170 epitope width was even greater in the normal control ( Supplemental Figure 5 ), which may also be due to tissue specificities.
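The "more than twice as large" comparison of fractional extensibility can be verified with simple arithmetic using the values quoted in the text; this check is ours, not part of the original analysis.

```python
# ~20 nm extension over ~60 nm initial length for the TK region,
# versus ~200 nm extension over ~1,400 nm initial length for the whole A-band.
tk_fractional = 20 / 60          # about 0.33 (33% relative extension)
a_band_fractional = 200 / 1400   # about 0.14 (14% relative extension)

fold_difference = tk_fractional / a_band_fractional
print(round(fold_difference, 2))  # 2.33, i.e., "more than twice as large"
```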
It is also notable that we observed a patient-dependent variation in the mean M-line–to–TK distance ( Supplemental Figure 6 , C and D) that showed a positive correlation with the truncated titin/T1 titin ratio ( Supplemental Figure 6E ), suggesting that the ratio of the number of full- versus partial-length titin molecules in the sarcomere had a functional effect on the TK region.
We propose the following model to explain our complex and somewhat puzzling observations ( Figure 8 ). In contrast to the normal sarcomere ( Figure 8A ), in DCM TTNtv+ sarcomeres, different numbers of full-length and truncated titin molecules are integrated into the sarcomere, the ratio of which is controlled by the penetrance of the genetic condition. Because the anchorage of TTNtv in the A-band is compromised, the molecules are pulled slightly toward the Z-line by their intact I-band sections. Therefore, in the slack DCM TTNtv+ sarcomere, the A-band titin length is increased, and, vice versa, the I-band titin length is reduced. However, this disposition (and hence the titin disarrangement) is slight, due probably to a prestretched state of the A-band section of the truncated titin that enhances its binding within the A-band. The MIR epitopes are slightly out of register, resulting in an increase in the STED epitope profile width. Because TTNtv lacks a good portion of its A-band section and its entire M-band section, only about half of the titins contribute to the A170 signal; hence, the A170 epitope intensity is reduced. The reduced number of titins in the bare zone and M-band likely results in structural disarrangement and weakening, leading to an increase in both the M-line–to–A170 distance (by ~10 nm) and the width of the A170 epitope ( Figure 8B ). This pathological disarrangement is indicated in the figure, albeit in an exaggerated way, by a crooked M-band. Upon stretch ( Figure 8, C and D ), the apparent A-band titin length is increased, but to a smaller degree than in the DCM TTNtv– sarcomere, given the prestretched and stabilized nature of the truncated titin molecules. Accordingly, the MIR epitopes on the normal and truncated titin molecules approach each other, resulting in a relative narrowing of the STED intensity profile.
The M-line–to–A170 epitope distance becomes reduced upon sarcomere stretch, which is a paradoxical phenomenon due, conceivably, to a mechanically driven ordering in the M-band. In principle, the faulty mechanosensor function of the M-band revealed here may be a major pathway leading to manifest DCM. Although some elements of our proposition, such as the prestretched and stabilized A-band portion of the truncated titin and the structurally disarranged M-band, are hypothetical and need further exploration, the model is consistent with our experimental data and provides testable predictions.
In conclusion, our results provide strong support for the notion that titin truncating variants are a major cause of familial DCM. Truncated titin molecules are incorporated and integrated into the sarcomere and likely cause small but functionally important internal structural and mechanical perturbations. The compensatory effects in the I/A junction and the faulty mechanosensor function in the M-band region of titin probably play a substantial role in the pathway toward DCM. | Authorship note: DK and HT are co–first authors and contributed equally to this work.
Heterozygous (HET) truncating variant mutations in the TTN gene (TTNtvs), encoding the giant titin protein, are the most common genetic cause of dilated cardiomyopathy (DCM). However, the molecular mechanisms by which TTNtv mutations induce DCM are controversial. Here, we studied 127 clinically identified DCM human cardiac samples with next-generation sequencing (NGS), high-resolution gel electrophoresis, Western blot analysis, and super-resolution microscopy in order to dissect the structural and functional consequences of TTNtv mutations. The occurrence of TTNtv was found to be 15% in the DCM cohort. Truncated titin proteins matching, by molecular weight, the gene sequence predictions were detected in the majority of the TTNtv + samples. Full-length titin was reduced in TTNtv + compared with TTNtv – samples. Proteomics analysis of washed myofibrils and stimulated emission depletion (STED) super-resolution microscopy of myocardial sarcomeres labeled with sequence-specific anti-titin antibodies revealed that truncated titin was structurally integrated into the sarcomere. Sarcomere length–dependent anti–titin epitope position, shape, and intensity analyses pointed at possible structural defects in the I/A junction and the M-band of TTNtv + sarcomeres, which probably contribute, possibly via faulty mechanosensor function, to the development of manifest DCM.
Truncated mutants of titin are structurally incorporated into the sarcomere and cause axial disarrangement, thereby contributing to the pathogenesis of dilated cardiomyopathy. | Author contributions
DK and HT designed research studies, conducted experiments, acquired data, analyzed data, and wrote the manuscript. The order of the co–first authors’ names was determined on the basis of the amount of experiments and analyses conducted. BK conducted STED experiments, acquired and analyzed STED data. GT acquired data and analyzed STED data. PD analyzed STED data. AAS acquired patient information data. MP, IH, and BS provided patient samples. SL provided antibodies. TR acquired patient information data and provided patient samples. AG, GB, and CB acquired and analyzed NGS data. BM designed research studies and provided patient samples. MSZK designed research studies, conducted experiments, acquired data, analyzed data, and wrote the manuscript.
Supplementary Material | We gratefully acknowledge the assistance of Krisztina Lór with experimental preparations (Department of Biophysics and Radiation Biology, Semmelweis University, Budapest, Hungary). We thank András Csillag and Gergely Zachar (Department of Anatomy, Histology, and Embryology of Semmelweis University) for providing access to the Microm HM560 Cryostat. This research was funded by the ÚNKP-19-3-I New National Excellence Program of The Ministry for Innovation and Technology (to DK) and by grants from the Hungarian National Research, Development and Innovation Office (K135360, to MK; FK135462, to BK; K135076, to BM; K134939, to TR; project no. NVKP_16-1–2016-0017 National Heart Program and a 2020-1.1.6-JÖVŐ-2021-00013 grant); the Ministry for Innovation and Technology of Hungary (Thematic Excellence Programme 2020-4.1.1.-TKP2020, within the framework of the Therapeutic Development and Bioimaging thematic programs of Semmelweis University; TKP2021-NVA-15 and TKP2021-EGA-23, implemented through the National Research, Development and Innovation Fund and financed under the TKP2021-NVA and TKP2021-EGA funding schemes, respectively); and the European Union (project no. RRF-2.3.1-21-2022-00003).
11/14/2023
In-Press Preview
01/16/2024
Electronic publication | CC BY | no | 2024-01-16 23:40:16 | J Clin Invest.; 134(2):e169753 | oa_package/dc/49/PMC10763722.tar.gz |
|
PMC10766496 | 38176709 | INTRODUCTION
Peripheral nerve regeneration is a complex, time- and space‐dependent cellular program (Jessen & Mirsky, 2019 ). Identifying the mechanisms of nerve regeneration requires an examination of all cellular molecules and signaling pathways at different times and neural tissue locations (Li et al., 2020 ). The techniques of genetic modification (Raivich & Makwana, 2007 ; Schweizer et al., 2002 ), gene/protein expression assays (Funakoshi et al., 1993 ; Jiménez et al., 2005 ), administration of inhibitors or inducers (Chan et al., 2003 ; Kilmer & Carlsen, 1984 ), and concentration/activity assays of biomolecules and ions (Couraud et al., 1983 ; Yan et al., 2010 ) have increased our knowledge about cellular mechanisms of nerve regeneration. However, the multifunctionality and complex interactions of the cellular molecules and signaling pathways make it difficult to predict the outcome (Chang et al., 2013 ; van Niekerk et al., 2016 ). As a result, there is still ongoing research on the interactions of these components. Iron is an important cofactor for many intracellular enzymes such as DNA polymerases, DNA helicases, nitrogenases, catalases, and peroxidases (Prabhakar, 2022 ). In addition, it is a component of mitochondrial respiratory chain proteins, which are involved in ATP production. Yet, free, non‐protein‐bound iron generates free radicals by interconverting between ferrous (Fe 2+ ) and ferric (Fe 3+ ) forms, which can damage cellular components (Eid et al., 2017 ). Cell death processes such as apoptosis, autophagy, and ferroptosis can be induced by reactive oxygen species (ROS) (Endale et al., 2023 ). Iron overload promotes cell apoptosis by inducing endoplasmic reticulum stress and mitochondrial dysfunction (Schulz, 2011 ). Ferroptosis is an iron-dependent form of cell death characterized by intracellular iron accumulation and an increase in lipid peroxidation (Endale et al., 2023 ). 
In the physiological state, iron toxicity is prevented by iron‐binding proteins such as transferrin (Tf). Iron‐binding proteins participate in iron homeostasis by absorbing, recycling, and storing iron (Eid et al., 2017 ; Schulz, 2011 ). Impaired iron homeostasis is observed in peripheral neuropathies. Iron overload is a common symptom of various neurodegenerative disorders with peripheral neuropathies, such as neuroferritinopathy and Friedreich's ataxia (Barbeito et al., 2009 ; Eid et al., 2017 ; Schröder, 2005 ). On the other hand, iron deficiency is associated with restless leg syndrome and anemia‐induced peripheral neuropathy (Connor et al., 2017 ; Kabakus et al., 2002 ). Iron deficiency during development results in decreased levels of myelin basic protein (MBP) and peripheral myelin protein 22 in rats, which persist even after replenishment with an Fe‐sufficient diet (Amos‐Kroohs et al., 2019 ). These studies indicate that iron plays an important role in the development of the peripheral nervous system and the occurrence of peripheral neuropathies. Therefore, the present review aims to evaluate the changes in iron homeostasis during peripheral nerve degeneration and regeneration. These data can help to better understand the role of iron in peripheral nerve regeneration and in the initiation/progression of peripheral neuropathies. | CONCLUSION
After PNI, the expression of all proteins involved in iron homeostasis is increased in SCs and axons, indicating a high demand for iron during this period. Based on previous studies, iron homeostasis proteins play a role in SC differentiation, myelination, and axonal outgrowth. However, the intracellular signals that induce the expression of these proteins are yet to be clarified. On the other hand, there are few data on the effects of iron status (deficiency or excess) on peripheral nerve regeneration, which calls for further research. Moreover, the role of iron in the cellular signaling pathways involved in peripheral nerve regeneration remains to be elucidated. | Abstract
Iron accumulates in the neural tissue during peripheral nerve degeneration. Some studies have already suggested that iron facilitates Wallerian degeneration (WD) events such as Schwann cell de‐differentiation. On the other hand, intracellular iron levels remain elevated during nerve regeneration and decrease only gradually. Iron enhances Schwann cell differentiation and axonal outgrowth. Therefore, there seems to be a paradox in the role of iron during nerve degeneration and regeneration. We explain this contradiction by suggesting that the increase in intracellular iron concentration during peripheral nerve degeneration likely prepares neural cells for the initiation of regeneration. Changes in iron levels are the result of changes in the expression of iron homeostasis proteins. In this review, we will first discuss the changes in iron/iron homeostasis protein levels during peripheral nerve degeneration and regeneration and then explain how iron is related to nerve regeneration. These data may help to better understand the mechanisms of peripheral nerve repair and to find a solution to prevent or slow the progression of peripheral neuropathies.
Iron trafficking between Schwann cell and growth cone during peripheral nerve regeneration.
Bolandghamat, S., & Behnam‐Rassouli, M. (2024). Iron role paradox in nerve degeneration and regeneration. Physiological Reports, 12, e15908. 10.14814/phy2.15908 38176709 | INTRACELLULAR IRON SIGNALING PATHWAYS IN SCHWANN CELLS (SCs)
Iron, either as free form (ferric ammonium citrate [FAC] only at concentrations of 0.5 and 0.65 mM) or holo‐Tf (iron‐bound Tf) induces an increase in cyclic adenosine monophosphate (cAMP), phosphorylated (p)‐ cAMP‐response element binding protein (CREB), reactive oxygen species, MBP, and myelin protein zero (P0) levels in serum‐deprived SCs (Figure 1 ) (Salis et al., 2012 ). The addition of either deferoxamine (an iron chelator), H‐89 (a protein kinase A [PKA] antagonist), or N‐acetylcysteine (a powerful antioxidant) prevents these effects of iron/hTf, indicating the role of cAMP/PKA/CREB pathway and reactive oxygen species in the pro‐differentiating effect of iron (Salis et al., 2012 ). However, H‐89 has no effect on iron/hTf‐induced P0 levels in serum‐deprived SCs, which means iron increases P0 expression through a PKA‐independent pathway (Salis et al., 2012 ). These effects are not observed with iron concentrations below or above 0.5–0.65 mM (Salis et al., 2012 ). In the cAMP/PKA/CREB signaling pathway, cyclic AMP binds and activates PKA which then phosphorylates the transcription factor CREB (Sassone‐Corsi, 2012 ; Shaywitz & Greenberg, 1999 ). The phosphorylated CREB, along with its coactivators, binds to cAMP‐response elements (CREs) in the gene promoter and activates gene transcription (Chrivia et al., 1993 ; Kwok et al., 1994 ; Lundblad et al., 1995 ; Shaywitz & Greenberg, 1999 ). It was found that treatment of SCs with cAMP increases intracellular labile Fe 2+ , 5‐hydroxymethylcytosine (5hmC) levels, and transcription of pro‐myelinating genes (Camarena et al., 2017 ). There is a positive correlation between 5hmC levels and gene transcription (Camarena et al., 2017 ; Wu & Zhang, 2017 ). 5‐hydroxymethylcytosine is a DNA demethylation intermediate that regulates gene transcription (Wu & Zhang, 2017 ). It is produced by the activity of ten‐eleven translocation (Tet) methylcytosine dioxygenase utilizing Fe 2+ as a cofactor (Wu & Zhang, 2017 ). 
The PKA inhibitors have no effect on the cAMP‐induced increase in labile Fe 2+ and 5hmC in SCs. Administration of iron chelators or a V‐ATPase inhibitor (an endosomal acidification inhibitor) prevents the effects of cAMP on 5hmC, indicating the role of endosomal iron release in 5hmC generation. It appears that cAMP enhances the function/number of endosomal V‐ATPases, which leads to increased endosomal iron release, 5hmC generation, and gene transcription (Figure 1 ) (Camarena et al., 2017 ). In mammalian cells, iron regulates the fate of mRNAs encoding iron importer proteins (Tf receptor 1 [TfR1], divalent metal transporter 1 [DMT1]), the iron‐storage protein ferritin (Fer), and the iron exporter ferroportin (Fpn) via iron regulatory proteins (IRP1 and IRP2) (Anderson et al., 2012 ). At a low intracellular iron concentration, IRP1 loses its iron–sulfur (Fe‐S) cluster and binds, in association with IRP2, to iron‐responsive elements (IREs) in the 3′ region of the TfR1 and DMT1 mRNAs, preventing their degradation by RNase, while binding of IRP1/2 to the 5′ region of the Fer and Fpn mRNAs prevents their translation (Anderson et al., 2012 ; Read et al., 2021 ). On the other hand, at a high intracellular iron concentration, IRP1 acts as an aconitase containing the Fe‐S cluster, and IRP2 is degraded by the proteasome (Anderson et al., 2012 ; Read et al., 2021 ). Then, in the absence of IRP1/2 binding, the TfR1 and DMT1 mRNAs are degraded, while the Fer and Fpn mRNAs are translated (Anderson et al., 2012 ; Read et al., 2021 ). However, after nerve injury, iron accumulates in SCs, and this accumulation coincides with the up‐regulation of TfR1, DMT1, and Fpn (Martinez‐Vivot et al., 2015 ; Raivich et al., 1991 ; Salis et al., 2007 ; Schulz, 2011 ). 
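The IRP/IRE switch described above reduces to a small decision table. The following sketch is our simplification, with a hypothetical function name and argument names, encoding the four outcomes for low versus high intracellular iron:

```python
def ire_outcome(iron_level: str, ire_position: str) -> str:
    """Fate of an IRE-containing mRNA under the IRP1/2 switch described above.

    iron_level: "low" or "high"; ire_position: "3'" (TfR1, DMT1 mRNAs)
    or "5'" (Fer, Fpn mRNAs). Simplified sketch, not a quantitative model.
    """
    # At low iron, IRP1 lacks its Fe-S cluster and IRP2 escapes proteasomal
    # degradation, so both bind IREs; at high iron, neither binds.
    irp_bound = iron_level == "low"
    if ire_position == "3'":
        # 3' IRE occupancy protects TfR1/DMT1 mRNA from RNase degradation.
        return "mRNA stabilized" if irp_bound else "mRNA degraded"
    if ire_position == "5'":
        # 5' IRE occupancy blocks translation of Fer/Fpn mRNA.
        return "translation blocked" if irp_bound else "translated"
    raise ValueError(f"unknown IRE position: {ire_position!r}")


# The post-injury situation is the anomaly the paragraph ends on: iron is high,
# yet TfR1 and DMT1 (3' IRE mRNAs) are up-regulated rather than degraded.
print(ire_outcome("high", "3'"))  # mRNA degraded (the expected, pre-injury logic)
```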
Iron is also a cofactor for many intracellular proteins and enzymes such as proteins of the mitochondrial respiratory chain (e.g., cytochrome c ) (Cammack et al., 1990 ), intracellular antioxidants (peroxidases and catalase) (Cammack et al., 1990 ), enzymes that are responsible for DNA replication and repair (DNA polymerase, DNA helicase, DNA primase, ribonucleotide reductase, glutamine phosphoribosylpyrophosphate amidotransferase) (Cammack et al., 1990 ; Zhang, 2014 ), proteins involved in translation and post‐translational modification of proteins (e.g., IRP1/2, mitochondrial aconitase, and prolyl and lysyl hydroxylases) (Cammack et al., 1990 ), and enzymes involved in lipid metabolism (e.g., fatty acid desaturase, stearoyl‐CoA desaturase, lipoxygenase, purple acid phosphatase) (Cammack et al., 1990 ; Rockfield et al., 2018 ).
CHANGES IN IRON HOMEOSTASIS PROTEINS AND IRON LEVELS AFTER PERIPHERAL NERVE INJURY (PNI)
Iron homeostasis proteins are expressed at very low levels in the intact nerve. After nerve injury, these proteins are up‐regulated in the lesion site, the distal nerve segment, and slightly in a narrow proximal segment neighboring the lesion site (Camborieux et al., 1998 ; Hirata et al., 2000 ; Madore et al., 1999 ; Martinez‐Vivot et al., 2015 ; Raivich et al., 1991 ; Salis et al., 2007 ; Schulz, 2011 ) (Table 1 ). Iron homeostasis proteins that have been studied so far post‐PNI include transferrin (Tf) (Salis et al., 2007 ), TfR1 (Raivich et al., 1991 ; Salis et al., 2007 ), DMT1 (Martinez‐Vivot et al., 2015 ), Fer (Rosenbluth & Wissig, 1964 ), Fpn (Schulz, 2011 ), ferroxidase ceruloplasmin (Cp) (Schulz, 2011 ), heme oxygenase 1 (HO‐1) (Hirata et al., 2000 ; Hirosawa et al., 2018 ), and hemopexin (Hpx) (Camborieux et al., 1998 ; Madore et al., 1999 ) (Figure 2 ). Since the increased expression of iron homeostasis proteins occurs before macrophage invasion in Wallerian degeneration (WD), it has been suggested that their expression is regulated early on by SCs and fibroblasts in the lesion site (Hirosawa et al., 2018 ; Madore et al., 1999 ). During nerve regeneration, the levels of iron homeostasis proteins progressively decrease and return to the normal levels of the intact nerve (Camborieux et al., 1998 ; Hirata et al., 2000 ; Hirosawa et al., 2018 ; Madore et al., 1999 ; Martinez‐Vivot et al., 2015 ; Raivich et al., 1991 ; Rosenbluth & Wissig, 1964 ; Salis et al., 2007 ).
Tf and TfR1
Tf is the main iron‐transport protein in the extracellular fluid. It delivers iron to cells by binding to TfR1 on the cell surface, followed by endocytosis (Schulz, 2011 ). Iron binding to Tf prevents the toxic effects of free iron in the extracellular space (Eisenstein, 2000 ). In the cell, TfR1 levels are controlled by negative feedback from the intracellular iron concentration (Eisenstein, 2000 ); however, after PNI, this control is lost, and Tf/TfR1 levels are increased in phagocytic SCs and regenerating motor neurons (Raivich et al., 1991 ; Salis et al., 2007 ; Schulz, 2011 ). This event is accompanied by increased endoneurial iron uptake in the lesion site (Raivich et al., 1991 ). The increased Tf levels in SCs and neurons result from both increased gene expression and uptake from the systemic circulation (Raivich et al., 1991 ; Salis et al., 2007 ). In axolotl regenerating axons, Tf is carried via fast anterograde transport and released from the growth cones (Kiffmeyer et al., 1991 ). Tf has a cytoplasmic location in SCs and axons and is enriched at the nodes of Ranvier of myelinated fibers (Lin et al., 1990 ). Tf is more abundant in myelinated than in unmyelinated peripheral nerves, likely reflecting its role in myelination (Lin et al., 1990 ). It has been found that ablation of TfR1 reduces embryonic SC proliferation, maturation, and postnatal axonal myelination (Santiago González et al., 2019 ). Moreover, the addition of iron or holo‐Tf to SC cultures prevents the cell de‐differentiation induced by serum withdrawal, as evidenced by increased expression of SC differentiation markers such as MBP and P0 (Salis et al., 2002 , 2012 ). Holo‐Tf induces MBP and P0 expression through cAMP/PKA/CREB‐dependent and PKA‐independent signaling pathways, respectively, in serum‐deprived SCs (Salis et al., 2012 ).
DMT1
DMT1 is responsible for the cellular uptake of non‐Tf‐bound iron through the cellular and endosomal membranes (Martinez‐Vivot et al., 2013, 2015). In the peripheral nerve, DMT1 is localized in the plasma membrane of SCs (Martinez‐Vivot et al., 2013). In a recent study, after a sciatic nerve crush injury, an increase in DMT1 mRNA and protein levels was observed at the lesion site and distal stump (Martinez‐Vivot et al., 2015). However, another study could not find any change in DMT1 mRNA in the distal nerve after PNI (Schulz, 2011). The increase in DMT1 levels is thought to result from inflammatory processes activated during WD, as observed in the CNS (Martinez‐Vivot et al., 2015; Urrutia et al., 2013). In differentiated PC12 cells (a model for neuronal differentiation into sympathetic‐neuron‐like cells (Hu et al., 2018)), DMT1 is responsible for the majority of iron uptake (Mwanjewe et al., 2001; Schonfeld et al., 2007). Ablation of DMT1 reduces embryonic SC proliferation, maturation, and postnatal myelination (Santiago González et al., 2019). Ablation of DMT1 also down‐regulates TfR1 and vice versa (Santiago González et al., 2019). After chronic constriction injury of the sciatic nerve, the expression of the DMT1 mRNA isoform lacking the iron‐responsive element ((−) IRE mRNA) and of DMT1 protein is increased in the spinal cord dorsal horn, with a peak at 7 days post‐injury (Xu et al., 2019). Since intracellular iron levels control the binding of iron regulatory proteins to the IRE and the stabilization of the mRNA (Anderson et al., 2012), the increased expression of (−) IRE mRNAs after PNI may be a cellular strategy for iron accumulation that escapes the inhibitory effect of high iron levels on the expression of iron‐importer proteins.
Fer
Fer is an intracellular iron‐storage protein. It has ferroxidase activity, which converts ferrous into ferric iron to be deposited inside the Fer core (Santiago González et al., 2019). There are two types of Fer inside the cell: cytosolic and mitochondrial Fer (Arosio & Levi, 2010). Fer expression is controlled by the intracellular iron concentration: a high iron concentration increases Fer expression (Anderson et al., 2012). There are no reports of nerve Fer levels after PNI; however, considering the iron accumulation after PNI (Martinez‐Vivot et al., 2015; Raivich et al., 1991), Fer levels are presumably increased as well. Ferric ammonium citrate (FAC)‐induced iron overload in differentiated PC12 cells increases the expression of Fer subunit mRNAs (Helgudottir et al., 2019). Ablation of Fer reduces embryonic SC proliferation, maturation, and postnatal myelination; these defects are more severe in Fer knockout mice than in TfR1 or DMT1 knockout mice (Santiago González et al., 2019). Moreover, neurons of cultured spinal ganglia, like those of intact ganglia, take up exogenous Fer (Rosenbluth & Wissig, 1964).
Fpn and Cp
SCs express Fpn and Cp, two proteins that partner to efflux iron from SCs (Camborieux et al., 1998; Schulz, 2011). The expression of Fpn mRNA is greater in differentiated PC12 cells than in undifferentiated cells (Helgudottir et al., 2019). Up‐regulation of Fpn has been shown after sciatic nerve crush injury (Schulz, 2011). Sciatic nerve crush injury in Cp knockout mice results in impaired axonal regeneration and motor recovery (Mietto et al., 2021; Schulz, 2011). Additionally, knocking out Cp in mature myelinating SCs reduces the expression of myelin proteins and induces oxidative stress (Santiago González et al., 2021). Knocking out Cp also causes increased levels of TfR1, DMT1, and Fer in SCs (Mietto et al., 2021).
Heme‐related proteins
In phagocytic SCs, HO‐1 is induced, which catalyzes the oxidation of heme to biliverdin, CO, and Fe 2+ (Hirata et al., 2000; Kim et al., 2019). HO‐1 is thought to be another source of iron overload in the cells (Liao et al., 2021). It also protects cells from the toxicity of free heme (Ryter, 2021). After PNI, HO‐1 is up‐regulated in the dorsal root ganglion (DRG) and spinal cord (Chen, Chen, et al., 2015; Liu et al., 2016). Another study revealed that, after PNI, HO‐1 is expressed in the microglia of the spinal cord but not in neurons or astrocytes (Liu et al., 2016). Induction of HO‐1 after PNI inhibits microglia activation (Liu et al., 2016), expression of pro‐inflammatory cytokines (Chen, Chen, et al., 2015), and neuropathic pain (Chen, Chen, et al., 2015; Liu et al., 2016). Hpx is a heme‐scavenger protein (Tolosano et al., 2010) that is up‐regulated in SCs and fibroblasts after PNI (Camborieux et al., 1998; Madore et al., 1994, 1999; Swerts et al., 1992). Chronic axotomy results in sustained elevation of Hpx levels up to 3 months after nerve injury (Madore et al., 1994). After binding heme, the heme‐Hpx complex enters the cell by binding to its receptor, low‐density lipoprotein receptor‐related protein 1 (LRP‐1), followed by endocytosis (Tolosano et al., 2010). The expression of LRP‐1 is increased in SCs after PNI (Campana et al., 2006; Gaultier et al., 2008; Mantuano et al., 2008, 2015), indicating increased uptake of heme by SCs. TNF‐α can induce LRP‐1 expression in SCs (Campana et al., 2006). LRP‐1 plays a role in SC survival (Campana et al., 2006; Mantuano et al., 2011; Orita et al., 2013) and migration after PNI (Mantuano et al., 2008). In sum, it seems that after PNI, SCs employ all mechanisms of iron accumulation, from increased cellular uptake of iron and heme to iron release from heme by HO‐1 activity.
Iron
The iron concentration in normal sciatic nerve tissue of the rat is 36.90 ± 1.00 μg/g (Liu et al., 2022). After nerve crush injury, iron accumulation is observed at the lesion site and distal stump (Martinez‐Vivot et al., 2015; Raivich et al., 1991). In a recent study, the maximum intracellular iron concentration was observed at the lesion site and distal stump 2–3 weeks after nerve crush injury and was approximately 2.5‐fold higher than the normal concentration (Martinez‐Vivot et al., 2015). Another study reported a peak of radioactive iron uptake at the lesion site 3 days after a nerve crush injury (Raivich et al., 1991). PNI also leads to increased uptake and accumulation of iron in the central nervous system (Graeber et al., 1989; Xu et al., 2019). In a recent study, iron levels were increased in the spinal cord dorsal horn following chronic constriction injury of the sciatic nerve, first observed at 3 days with a peak at 7 days post‐injury (Xu et al., 2019). Furthermore, Perls' staining of sciatic nerve explants has shown ferric ion deposition at 1 day, with a peak at 3 days post‐explant (Han et al., 2022).
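Combining the baseline concentration with the reported ~2.5-fold peak gives a rough, back-of-the-envelope estimate of the post-injury concentration (illustrative arithmetic only; the peak value is not reported directly above):

```python
# Back-of-the-envelope estimate from the figures quoted in the text:
# baseline rat sciatic nerve iron (Liu et al., 2022) and the ~2.5-fold
# peak increase 2-3 weeks post-crush (Martinez-Vivot et al., 2015).
baseline_ug_per_g = 36.90   # ug iron per g tissue, normal nerve
fold_increase = 2.5         # approximate peak fold change after crush injury

peak_ug_per_g = baseline_ug_per_g * fold_increase
print(f"Estimated peak iron concentration: {peak_ug_per_g:.1f} ug/g")
# prints: Estimated peak iron concentration: 92.2 ug/g
```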
ROLE OF IRON IN WD AND NERVE REGENERATION
Some authors have suggested that the up‐regulation of iron homeostasis proteins and high iron levels are needed for WD events (Hirata et al., 2000; Kim et al., 2019; Salis et al., 2007). HO‐1 is proposed to be involved in myelin degradation, SC de‐differentiation, and SC proliferation by inducing oxidative stress (Hirata et al., 2000; Kim et al., 2019). However, it has also been suggested that HO‐1 plays a role in cytoprotection against oxidative stress (Yardım et al., 2021), as the addition of ZnPP, a competitive inhibitor of HO‐1, reduces the viability of serum‐deprived PC12 cells (Lin et al., 2010; Martin et al., 2004). In addition, activation of the phosphatidylinositol 3‐kinase (PI3K)/Akt pathway, a survival signaling pathway (Barzegar‐Behrooz et al., 2022; Li et al., 2001), increases the expression of HO‐1 in PC12 cells (Martin et al., 2004). On the other hand, the literature shows that increased levels of iron homeostasis proteins and iron accumulation are observed a few days after WD initiation (Camborieux et al., 1998; Hirata et al., 2000; Madore et al., 1994, 1999; Martinez‐Vivot et al., 2015; Raivich et al., 1991; Salis et al., 2007; Schulz, 2011), when SCs have already de‐differentiated (Hirata et al., 2000). Thus, it appears that iron is not required for the initiation of WD, SC de‐differentiation, and proliferation (Santiago González et al., 2019), but rather for the late stage of WD and the initiation of nerve regeneration. Iron polarizes macrophages from a pro‐inflammatory M1 to an anti‐inflammatory M2 phenotype (Agoro et al., 2018; Chen et al., 2020). Also, induction of HO‐1 inhibits microglia activation (Chen, Chen, et al., 2015) and the expression of pro‐inflammatory cytokines after PNI (Liu et al., 2016). Exogenous iron uptake is increased in differentiated PC12 cells compared to undifferentiated cells (Mwanjewe et al., 2001).
Moreover, iron promotes SC differentiation in culture (Salis et al., 2002, 2012). Iron increases cAMP levels and CREB phosphorylation, which induces the expression of myelin proteins (Salis et al., 2012). It has been suggested that iron accumulation is required for the initiation of myelination in the CNS, as high iron concentrations are present in the myelin and cytoplasm of oligodendrocytes (Connor & Menzies, 1996). Iron is a cofactor for enzymes responsible for the synthesis and degradation of myelin lipids, such as fatty acid desaturase and lipid dehydrogenases (Connor & Menzies, 1996). Also, since iron is a cofactor for many enzymes involved in protein synthesis (Pain & Dancis, 2016), high iron levels may be required for the increased synthesis of neurotrophic factors and their receptors by SCs, although this remains to be elucidated (Han et al., 2022). After PNI, there is an increased expression of collagen and procollagen hydroxylases (Araki et al., 2001; Chen, Cescon, et al., 2015; Chernousov et al., 1999; Isaacman‐Beck et al., 2015; Siironen et al., 1992a, 1992b, 1996), which require iron as a cofactor (Gelse et al., 2003). Collagen types IV, V, and VI are components of the SC basal lamina involved in SC migration, spreading, myelination, M2 macrophage polarization, axonal growth, and axonal guidance (Chen, Cescon, et al., 2015; Chernousov et al., 2001, 2006; Erdman et al., 2002; Fang & Zou, 2021; Isaacman‐Beck et al., 2015; Lv et al., 2017; Sun et al., 2022). SCs can proliferate and grow normally on electrospun silk fibroin scaffolds incorporating different concentrations of iron oxide nanoparticles (1–10 wt.%) (Taneja, 2013). SC growth was better on scaffolds containing a higher concentration of incorporated iron particles (7 wt.%). Furthermore, nerve growth factor (NGF) levels were higher in electrospun silk fibroin scaffolds containing 3 wt.% iron oxide nanoparticles than in scaffolds with no or lower concentrations of iron oxide nanoparticles (Taneja, 2013). However, NGF levels were reduced in scaffolds containing 5 wt.% iron oxide nanoparticles (Taneja, 2013). Iron up to a concentration of 500 μM has not shown any cytotoxicity against PC12 cells for up to 5–6 days (Hong et al., 2003; Kim et al., 2011). Iron nanoparticles likewise show no cytotoxicity against cultured Schwann cells up to a concentration of 2 μg/mL (intracellular iron concentration of 1.21 ± 0.08 pg/cell) for 72 h (Xia et al., 2016, 2020). Moreover, an iron solution (FeCl2) at concentrations of 10, 100, or 500 mM in combination with NGF increases the viability of serum‐deprived PC12 cells (about 2‐fold) compared to NGF treatment alone (Hong et al., 2003). In PC12 cells, iron causes a dose‐dependent increase in the expression of p‐ERK, p‐Bad, and Bcl‐2 (Kim & Yoo, 2013). Phosphorylated Bad and Bcl‐2 are anti‐apoptotic proteins that reduce the release of cytochrome c from mitochondria (Kim & Yoo, 2013; Yardım et al., 2021). Furthermore, iron enhances the viability of SCs (Han et al., 2022). In a recent study, SC viability was approximately 140% in a 2.5 mM ferric ammonium citrate solution, but decreased at higher concentrations (Han et al., 2022). Thus, the effects of iron on cells appear to depend on its concentration and on the cellular iron‐chelating capacity (Zhao et al., 2013). Iron acts as a redox sensor in the cell (Outten & Theil, 2009) and also increases the synthesis of intracellular antioxidants such as glutathione (Cozzi et al., 2013; Lall et al., 2008). Therefore, during WD, an increase in iron levels in SCs may have a protective effect. Neurite outgrowth initiation and elongation are hindered by the addition of an iron chelator to DRG cultures (Schulz, 2011).
Previous studies have demonstrated that iron enhances neurite outgrowth in cultured PC12 cells, with or without NGF (Hong et al., 2003; Katebi et al., 2019; Kim et al., 2011; Sadeghi et al., 2023; Zarei et al., 2022). The effect of iron on the neurite outgrowth of PC12 cells is believed to be mediated by integrin β1 (Hong et al., 2003; Kim et al., 2011). The expression of integrin β1 in NGF‐treated PC12 cells increases with increasing iron concentration in the culture (Kim et al., 2011). Studies have shown that integrin β1 is involved in SC myelination (Nodari et al., 2007; Pellegatta et al., 2013). Inhibiting integrin β1 function prevents myelination and causes a demyelinating neuropathy with disrupted radial sorting of axons (Nodari et al., 2007; Pellegatta et al., 2013). Integrin β1‐null SCs can migrate and proliferate but do not extend processes around axons (Nodari et al., 2007; Pellegatta et al., 2013). Iron enhances NGF signaling in PC12 cells (Kim et al., 2011; Yoo et al., 2004). It increases the levels of p‐ERK 1/2 in a dose‐dependent manner (Kim et al., 2011; Yoo et al., 2004). Phosphorylated ERK enhances SC survival and axonal outgrowth (Hausott & Klimaschewski, 2019). Axonal growth cones have abundant mitochondria, providing the ATP required for protein synthesis, cytoskeleton assembly, and axonal transport. Given the role of iron in ATP synthesis and mitochondrial function, it is conceivable that the growth cone has a high iron demand (Schulz, 2011). Schulz suggested that SCs deliver the iron required by growth cone mitochondria (Schulz, 2011) (Figure 2). Knocking out the gene for Cp, a protein involved in iron export, in Schwann cells reduces mitochondrial ferritin (a marker of mitochondrial iron content) in axons and impairs nerve regeneration following PNI (Schulz, 2011).
Iron overload increases active matrix metalloproteinase‐9 (MMP‐9) and MMP‐1 levels in the CNS (García‐Yébenes et al., 2018; Mairuae et al., 2011). Elevated levels of MMP‐9 are also reported after PNI (Remacle et al., 2018; Siebert et al., 2001). MMP‐9 has been implicated in macrophage recruitment, SC migration and differentiation, axonal outgrowth, and remyelination after PNI (Verslegers et al., 2013). Migration of SCs can be promoted by the Hpx domain of MMP‐9 and LRP‐1 (Mantuano et al., 2008), which are both up‐regulated after PNI. Peripheral nerve injury increases the levels of iron homeostasis proteins and iron in the DRG and dorsal horn of the spinal cord, beyond the lesion site (Chen, Chen, et al., 2015; Liu et al., 2016; Xu et al., 2019), which is likely the result of retrogradely transported signals from the lesion site (Mietto et al., 2021). As mentioned above, the majority of studies have focused on the proteins involved in iron homeostasis, and relatively little is known about the effects of iron excess or deficiency on peripheral nerve degeneration and regeneration. Systemic or local administration of Fe3O4 nanoparticles after PNI improves the morphological, functional, and electrophysiological indices of the rat sciatic nerve (Chen et al., 2020; Pop et al., 2021; Tamjid et al., 2023). Intraperitoneal administration of omega‐3‐coated Fe3O4 nanoparticles, at a dosage of either 10 or 30 mg/kg/day for 1 week, improved morphological and functional indices of the rat sciatic nerve after nerve crush, with greater effects observed at 30 mg/kg (Tamjid et al., 2023). Also, oral administration of chitosan‐coated iron nanoparticles (2.5 mg/kg/day) for 21 days improves the morphological and functional indices of the sciatic nerve and slightly increases serum NGF levels after sciatic nerve compression injury (Pop et al., 2021).
In a recent study, a multilayered nerve conduit loaded with melatonin and Fe3O4 nanoparticles improved the morphological, functional, and electrophysiological indices of the rat sciatic nerve at 16 weeks post‐operation (Chen et al., 2020). The multilayered nerve conduit loaded with melatonin and Fe3O4 magnetic nanoparticles induced macrophage polarization to the M2 phenotype in the nerve (Chen et al., 2020). Moreover, loading the conduits with melatonin and Fe3O4 nanoparticles decreased the expression of pro‐inflammatory cytokines (IL‐6, TNF‐α, and IFNγ), neuronal nitric oxide synthase, and vimentin (a marker of fibrosis) in the nerve (Chen et al., 2020). On the other hand, it increased the expression of the anti‐inflammatory cytokine IL‐10, S100 (a Schwann cell marker), neurofilament protein 200 (a neuronal marker), MBP, and β3‐tubulin (a neuronal marker) (Chen et al., 2020). In another recent study, systemic administration of an iron solution exacerbated the DRG neuronal loss caused by sciatic nerve transection, as demonstrated by a decreased mean number of neurons and DRG volume (Mohammadi‐Abisofla et al., 2018). Neuronal loss in the DRG of the injured nerve was thus observed following iron administration (Mohammadi‐Abisofla et al., 2018), whereas iron usually accumulates in the DRG without significant toxicity after PNI. This may be explained by the iron‐chelating capacity being exceeded when iron administration causes an immediate increase in intracellular iron levels.
AUTHOR CONTRIBUTIONS
Conceptualization, S.B. and M.B.R.; writing‐original draft preparation, S.B. and M.B.R.; writing—review and editing, S.B. and M.B.R.; supervision, S.B. and M.B.R. All authors have read and agreed to the published version of the manuscript.
FUNDING INFORMATION
No funding information provided.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflict of interest, financial or otherwise.
ETHICS STATEMENT
Not applicable.
ACKNOWLEDGMENTS
Declared none.
DATA AVAILABILITY STATEMENT
Not applicable.

Physiol Rep. 2024 Jan 4; 12(1):e15908
PMC10766658 (PMID 38175303)

Introduction
Childhood is a sensitive period in life, with rapid bodily, neurological, cognitive, emotional, and social development. Experiencing multiple adverse events during childhood, such as losing a parent, physical abuse, or having a parent with a mental illness, is a known risk factor for physical and mental health problems in adulthood [ 1 ]. Adversity in early childhood might lead to lifelong impairments in health [ 2 , 3 ]. People who experience more adverse events during childhood are more likely to develop chronic illnesses such as heart disease, respiratory disease, and cancer [ 4 ]. They are also susceptible to developing depression, anxiety, and posttraumatic stress disorders [ 5 – 7 ]. The reasons why adverse childhood events may lead to poor health are diverse and still not completely understood, but sustained activation of the stress response system is assumed to be at the heart of this relationship. That is, chronic negative environmental factors may disrupt the neuroendocrine and immune systems and brain development, as well as learning abilities and future responses to stress [ 8 , 9 ]. These disruptions are in turn linked to poorer health outcomes and increased mental problems. Moreover, both attachment theory [ 10 ] and schema-based cognitive models of mental health problems [ 11 ] argue that people may develop maladaptive schematic representations of the self (e.g., as incompetent), others (e.g., as not to be trusted), and the world (e.g., as unsafe) when confronted with adversities during childhood. These schemas impact how people appraise and deal with relationships and stressors in life, making them more prone to develop mental health problems. Adverse childhood events (ACE) may become especially deleterious when a person is confronted with a catastrophic event, such as a cancer diagnosis, later in life [ 12 ], as it activates the stress system and (maladaptive) schemas.
Consequently, people with ACE may be susceptible to stress-related problems and may have more difficulty dealing with the adversities and challenges imposed by the illness, making them prone to mental health problems.
A large body of literature shows that cancer diagnosis and treatment may be associated with emotional problems, impaired quality of life, and chronic fatigue in a substantial subgroup of cancer survivors [ 13 , 14 ]. Identifying people who are susceptible to developing mental health problems when confronted with cancer is essential, as it may guide patient management and interventions. Whether people with ACE are at risk of mental health problems when confronted with cancer is, however, less well known.
Therefore, the aim of this study is to systematically review the literature on the association between ACE and mental health problems in cancer survivors. Insight into the relationship between ACE and mental health problems among cancer survivors may help to identify who is at risk for mental health problems. This knowledge might lead to the improvement of care for cancer survivors. | Method
Data sources and search strategy
A systematic review of the literature up to August 27th, 2023 was conducted in line with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. A total of four databases (PubMed, PsychINFO, Web of Science, and Cochrane) were searched for relevant papers. A combination of search terms from the following concepts was used: adverse childhood events AND cancer AND psychological outcomes. The detailed list of search terms associated with each concept included in the search is provided in Table 1 . The search strategy and selection of papers were guided by the research question: “What is the association between ACEs and mental health problems in cancer survivors?” We used the definition of the American Cancer Society when using the term “cancer survivor,” meaning that we considered anyone who has ever been diagnosed with cancer, no matter where they are in the course of their disease, to be a survivor. When performing the search, a filter for language was applied, including only articles in the English language.
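The concept-based strategy (terms OR-ed within a concept, concepts AND-ed together) can be expressed programmatically. The sketch below is a hypothetical illustration; the term lists are placeholders, not the full lists from Table 1:

```python
# Build a boolean database query from per-concept term lists:
# OR within each concept, AND across concepts.
concepts = {
    "adverse childhood events": ["adverse childhood experience*", "childhood trauma", "child abuse"],
    "cancer": ["cancer", "neoplasm*", "tumor"],
    "psychological outcomes": ["depression", "anxiety", "fatigue", "distress"],
}

def build_query(concepts: dict[str, list[str]]) -> str:
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    return " AND ".join(groups)

query = build_query(concepts)
print(query)
```

The same query string can then be adapted to each database's field-tag syntax before execution.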
Selection procedure
One author participated in the process of literature retrieval. Articles retrieved from the database searches were exported to a reference library (EndNote) and combined into one database, in which duplicates were deleted. Two authors then screened all articles based on title and abstract and excluded papers on irrelevant topics. Afterwards, the full texts of the remaining articles were read and labeled by three authors to reach the final selection. Inconsistencies between authors during the review process were resolved through discussion until consensus was achieved.
Both observational (cross-sectional, cohort, retrospective, and longitudinal) and intervention studies were included. Articles were included if the study reported an ACE measure correlated with psychological outcomes. Articles with adult life stress instead of ACEs were excluded. Articles assessing ACEs as a risk factor for cancer or as a correlate of screening behavior were also excluded. Furthermore, articles were excluded if the described study was not original research (e.g., a review article or letter to the editor), not peer-reviewed (e.g., conference proceeding, thesis), if the study population did not consist (solely) of cancer patients, or if the study population consisted of children instead of adults.
Data extraction
For an overview of the number of papers included and excluded, see Fig. 1. For each article included in the present review, the following data were extracted and described: first author and year of publication, cancer population, sample size (including mean age), study design, ACE measurement used, and prevalence of ACEs.
In Fig. 1, a flow chart is depicted of the inclusion and exclusion of articles derived from the database searches. In total, 1413 references were found, and after the removal of 295 duplicates, 1118 unique articles were retrieved. These articles were screened, and 79 full-text articles were assessed for eligibility. Finally, 25 articles were included.
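The flow counts reported above can be checked for internal consistency; a minimal sketch (the per-stage exclusion counts are derived here, not stated explicitly in the text):

```python
# Consistency check of the reported study-selection counts.
records_identified = 1413
duplicates_removed = 295
records_screened = records_identified - duplicates_removed  # title/abstract screening
full_text_assessed = 79
included = 25

assert records_screened == 1118  # matches the 1118 unique articles reported
excluded_at_screening = records_screened - full_text_assessed
excluded_at_full_text = full_text_assessed - included
print(excluded_at_screening, excluded_at_full_text)  # prints: 1039 54
```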
General characteristics of the included studies
The majority of the 25 studies were conducted in the USA ( n = 19, 76.0%) (see Table 2 ) [ 15 – 33 ]. The other studies were conducted in the UK [ 34 , 35 ], Brazil [ 36 ], Turkey [ 37 ], and China [ 38 , 39 ]. The number of participants in these studies ranged from 20 [ 21 ] to 1343 [ 31 ]. Median sample size was 110 (Q1 = 64, Q3 = 271, IQR = 207). Most of the studies were conducted in breast cancer survivors ( n = 16, 64%) [ 16 – 25 , 29 , 30 , 33 , 35 , 37 , 38 ]. One study was conducted in lung cancer patients [ 27 ], one in head and neck squamous cell cancer (HNSCC) patients [ 36 ], one in ovarian cancer patients [ 15 ], and one in hematologic cancer survivors [ 26 ]. Five studies (20.0%) were conducted in survivors of mixed cancer types [ 28 , 31 , 32 , 34 , 39 ].
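The reported descriptive statistics (median 110, Q1 = 64, Q3 = 271, so IQR = 271 − 64 = 207) can be reproduced with the Python standard library. The sample-size list below is a hypothetical placeholder, since the actual 25 values appear only in Table 2, so the printed numbers differ from the paper's:

```python
import statistics

# Hypothetical per-study sample sizes (the real values are in Table 2).
sample_sizes = [20, 45, 64, 70, 88, 110, 150, 200, 271, 400, 1343]

q1, median, q3 = statistics.quantiles(sample_sizes, n=4, method="inclusive")
iqr = q3 - q1
print(f"median={median}, Q1={q1}, Q3={q3}, IQR={iqr}")
# prints: median=110.0, Q1=67.0, Q3=235.5, IQR=168.5
```

The median and IQR are the natural summary here because sample sizes are heavily right-skewed (one study with n = 1343 would distort a mean and standard deviation).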
Type of ACE measurements
The Childhood Trauma Questionnaire (CTQ) (or subscale) was used to measure ACE in more than half of the studies ( n = 13, 52%) [ 16 , 18 – 21 , 24 , 29 , 32 – 34 , 36 , 37 , 40 ]. The CTQ is a 28-item inventory that provides a reliable and valid screening for a history of abuse and neglect [ 41 ]. Four studies [ 17 , 25 – 27 ] used the Risky Families Questionnaire (RFQ) [ 42 ], one study [ 22 ] used the Traumatic Events Survey (TES) [ 43 ], two studies [ 15 , 23 ] used the Childhood Traumatic Events Scale (CTES) [ 44 ], one study [ 38 ] used the Adverse Childhood Experience Questionnaire (ACEQ) [ 45 ], one [ 28 ] used the ACE Questionnaire by the Centers for Disease Control and Prevention (CDC) [ 46 ], and one [ 31 ] used the Life Stressor Checklist-Revised (LSC-R) [ 47 ]. Furthermore, one study [ 15 ] used the semi-structured interview Life Events and Difficulties Schedule (LEDS) [ 48 ], and one study used non-validated self-report questions regarding ACEs [ 35 ]. The percentage of patients who reported at least one incidence of ACE ranged from approximately 40.0% [ 18 ] to 95.5% [ 36 ]. For a detailed overview of the studies and the prevalence of ACE, see Table 2 .
ACEs and mental health
Depression and anxiety
The association between exposure to ACEs and depression was investigated in 12 of the 25 included studies [ 21 , 22 , 24 – 28 , 30 , 32 , 34 , 36 , 37 ]. Childhood adversities were significantly associated with higher levels of depressive symptoms in patients with cancer in 10 of these studies [ 21 , 24 – 27 , 30 , 32 , 34 , 36 , 37 ]. In two studies [ 22 , 28 ], no association between ACE and depression was found.
Nine studies investigated the relationship between ACEs and anxiety [ 15 , 22 , 23 , 25 – 27 , 29 , 36 , 37 ]. Associations with higher levels of anxiety were found in seven of these studies [ 15 , 25 – 27 , 29 , 36 , 37 ]. In two studies, elevated scores of depression and anxiety were associated with ACEs in univariate analyses, while in multivariate analysis involving depression, anxiety, distress, and/or physical symptoms only the relationship between ACEs and depression remained significant [ 25 , 27 ].
Some studies looked not only at childhood adversities in general (i.e., total score) but also at specific adversities (i.e., subscales) [ 25 , 26 , 30 , 36 ]. In one study all the subscales of the RFQ were significantly associated with higher levels of anxiety, depression, and distress [ 25 ], while in another study using the RFQ, differences between the types of adverse events were found [ 26 ]. That is, the abuse subscale was associated with distress, the chaotic home environment was associated with higher levels of distress and anxiety, and the neglect subscale was not associated with any of these outcomes [ 26 ]. Additionally, the CTQ subscales were found to be differently associated with psychological variables. That is, physical neglect was found to be associated with higher anxiety levels, whereas physical abuse and emotional neglect were not. Emotional abuse, physical abuse, and physical neglect were all associated with higher levels of depression [ 36 ]. In a study among breast cancer survivors using the CTQ, emotional neglect and abuse were associated with higher initial levels of depression, but not with changes in depressive symptoms over time, whereas physical neglect was a significant predictor of higher levels of stress over time, but not of the initial stress level [ 40 ].
Fatigue
Seven studies investigated the relationship between ACEs and fatigue during and/or after cancer treatment. Cancer patients who had been exposed to ACEs experienced higher levels of fatigue in six of the seven studies [ 16 – 19 , 21 , 40 ]. One study identified five distinct groups of fatigue trajectories: women who experienced consistently low, low and decreasing, low and then increasing, high and then decreasing, and persistently elevated levels of fatigue. Women who had been exposed to more ACEs were more likely to suffer from consistently high levels of fatigue, or to experience high levels immediately after treatment and then recover, rather than having low and then increasing levels of fatigue [ 17 ]. In a study among breast cancer survivors [ 16 ], a dichotomous ACE score was associated with higher fatigue, but the severity of ACE was not. In another study among breast cancer survivors, the emotional neglect and abuse subscales of the CTQ were associated with initial fatigue levels, but not with changes in fatigue over time [ 40 ]. Moreover, survivors who suffered from severe pain and high sleep disturbance reported the highest rates of family violence in childhood, forced touching, and forced sex before the age of 16, compared to people with no pain and moderate sleep disturbance or with moderate pain and moderate sleep disturbance [ 31 ]. Regarding physical abuse, the difference was only significant between the severe and the no pain group [ 31 ].
Other psychological variables and mechanisms
ACEs were also found to be associated with more cancer-related traumatic symptoms [ 32 ]. Specifically, intrusive thoughts were correlated with having experienced emotional, physical, and sexual abuse [ 20 ]. Furthermore, exposure to ACEs was associated with elevated levels of cancer-related psychological distress [ 19 , 26 , 27 ], perceived stress [ 21 , 30 ], sleep disturbance and sleep-related impairment [ 23 ], and suicidal ideation [ 39 ]. Moreover, ACEs were associated with worse emotional well-being [ 19 ], worse quality of life [ 30 , 38 ], and with an increase in negative adjustment and a decrease in positive adjustment after cancer [ 37 ].
Several studies investigated possible mediating and moderating factors. Social support seems to mediate the relationship between ACEs and quality of life [ 19 ], and marital status may buffer the effect of childhood adversities on fatigue and depression [ 21 ]. Moreover, women with breast cancer who experienced ACE showed elevated cortisol and proinflammatory cytokine release, especially when they showed reduced parasympathetic activity during real-time stress (Trier Social Stress Test) [ 29 ]. It was also found that the differences in psychological well-being between people who had been exposed to ACEs and those who had not were similar during the diagnostic and the treatment phase [ 21 ]. Additionally, one study found that people with ACEs experienced less social and professional support during cancer treatment [ 35 ], and another study [ 33 ] showed that they may profit from mindfulness-based therapy.
The aim of this systematic literature review was to investigate the association between ACEs and psychological problems in cancer survivors. Although variations were found, and not all studies reported an association between ACEs and mental health problems in cancer survivors, the majority did. On the basis of this review, it seems safe to state that ACEs are prevalent (> 50%) and seem to be a risk factor for more emotional distress, anxiety, depressive symptoms, and fatigue in cancer survivors. If so, the next question is what to do with this knowledge.
The most obvious option is to start screening patients on whether or not they have experienced ACEs. This may, however, be rather disturbing for patients [ 49 ], and physicians and nurses might be reluctant to do this with questionnaires, as it is experienced as too upsetting for patients [ 50 ]. A more viable option might be to teach the medical staff to ask patients whether or not they have had experiences during childhood that they feel may impact their needs and abilities during the illness process [ 8 ]. In a study by Clark (2014), women with breast cancer emphasized the importance of asking about adversities (including abuse), as it may give them the opportunity and choice to disclose adversities and thereby tailor support [ 49 ]. Recently, scholars have started to develop ideas about how health care can be adapted in such a way that it takes childhood adversities into account [ 9 , 51 ]. This so-called trauma-informed care (TIC) recognizes and responds to the impact of trauma on individuals seeking healthcare. It is an approach that emphasizes safety, trustworthiness, choice, collaboration, and empowerment for individuals who have experienced trauma [ 52 ].
While this review suggests that ACE is a risk factor for psychological problems in cancer survivors, many questions remain unanswered. First, are specific ACEs a risk factor for specific psychological problems (e.g., anxiety, depression, PTSD) in patients with cancer? Previous research among non-somatically ill patients suggests that different types of adversities (e.g., neglect, abuse) make people susceptible to specific mental health problems (anxiety, depression, PTSD) [ 53 , 54 ]. For example, Veen et al. (2013) [ 55 ] found that emotional neglect was particularly associated with anhedonia and sexual abuse with anxious arousal.
Moreover, do specific childhood adversities have a specific impact on patients with cancer depending on illness characteristics such as illness phase (i.e., diagnostic, treatment, follow-up, palliative) and type of treatment? It could be argued that certain ACEs may be especially influential during the treatment process. For example, for patients who have experienced sexual trauma, brachytherapy, where a radiation source is placed inside the vagina or prostate, may be particularly distressing and potentially retraumatizing [ 56 ]. Other ACEs, such as emotional abuse, may have an especially deleterious effect on emotional recovery after treatment.
Furthermore, not only the type of adversity but also its frequency may have a specific impact on mental health, since previous studies have suggested that ACEs have a dose–response effect on health [ 42 ], with more ACEs (> 2) being more deleterious than one or two. Whether this is also true for people’s mental health when confronted with cancer is unclear. Moreover, the majority of included studies were conducted in Western countries and in women with breast cancer, and the findings therefore may not be generalizable to cancer patients in general.
Another topic in which many questions remain largely unanswered concerns the mechanisms by which ACEs influence mental health problems in cancer survivors. In general, ACEs are believed to influence (mental) health through biological (altered stress and inflammatory responses, epigenetic alterations), psychological (developmental maladaptive schemas, cognitive distortions, unhealthy behaviors), and social (adverse social circumstances) processes [ 57 – 60 ]. How these mechanisms play a role in the different mental health problems survivors face (i.e., fatigue, anxiety, anhedonia, separation distress) may be a focus of attention in future studies. | Conclusion
Childhood adversities are prevalent and a risk factor for psychological problems in patients diagnosed with cancer. Recognizing the prevalence of ACE and its impact on mental health in cancer survivors and responding in a way that prevents re-traumatization and promotes resilience should become a focus of attention in cancer care. | Purpose
The purpose of this study was to systematically review the literature on the association between adverse childhood events (ACEs) and mental health problems in cancer survivors.
Methods
This review was conducted in line with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Four databases (PubMed, PsycINFO, Web of Science, and Cochrane) were searched on 27-08-2023.
Results
Of the 1413 references yielded by the literature search, 25 papers met inclusion criteria and were reviewed. Most studies were performed in the USA, most included breast cancer survivors, and the number of included participants ranged between 20 and 1343. ACEs were relatively prevalent, with self-report rates ranging between 40 and 95%. Having been exposed to ACEs was a risk factor for heightened levels of emotional distress, anxiety, depressive symptoms, and fatigue during cancer treatment. Results varied depending on the variables included and per subscale, but were consistent across different cultures and heterogeneous patient groups.
Conclusion
The association between ACE and mental health outcomes was significant in most studies. In order to improve treatment for this vulnerable population, it may be necessary to screen for ACEs before cancer treatment and adjust treatment, for example, by means of trauma-informed care (TIC), which recognizes and responds to the impact of trauma on individuals seeking healthcare.
Keywords | Author contribution
CH had the idea for the article, EH and FT performed the literature search, CH, EH, and FT performed data analysis, and CH and FM drafted and critically revised the work.
Declarations
Ethics approval and consent to participate
Given the nature of the article, no approval by a certified Medical Ethics Committee was acquired. The authors consent to participate in this review.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-16 23:35:01 | Support Care Cancer. 2024 Jan 4; 32(1):80 | oa_package/b3/68/PMC10766658.tar.gz |
|
PMC10768880 | 38188453 | Methods
Study Design and Participants
This cross-sectional study used the baseline data from the Bunkyo Health Study [ 11 ]. Briefly, we recruited 1629 individuals aged 65 to 84 years living in Bunkyo-ku, an urban area in Tokyo, Japan, from October 15, 2015, to October 1, 2018. Among the Bunkyo Health Study participants, we included only those who had not been diagnosed with diabetes and for whom 75-g OGTT data were available: 187 participants had been diagnosed with diabetes, and 75-g OGTT data were unavailable for 4 participants. The remaining 1438 participants were included in this study.
The study protocol was approved by the ethics committee of Juntendo University in November 2015 (Nos. 2015078, and M15-0057). Briefly, subjects were evaluated over 2 days. On the first day, we evaluated cognitive function, muscle strength, and physical performance. On the second day, after an overnight fast, we evaluated body weight and composition with dual-energy X-ray absorptiometry (DXA), abdominal fat distribution with magnetic resonance imaging (MRI), and glucose tolerance with a 75-g OGTT. This study was carried out in accordance with the principles outlined in the Declaration of Helsinki. All participants gave written informed consent and were informed that they had the right to withdraw from the study at any time.
Procedure for a 75-g OGTT and Definition of Glucose Tolerance
All participants underwent a standard 75-g OGTT, carried out in the morning after an overnight fast. The participants were instructed to eat a well-balanced diet for 3 days prior to the test and to refrain from a low-carbohydrate diet. Blood samples were collected immediately before, as well as 30, 60, 90, and 120 minutes after ingestion of glucose. We measured hemoglobin A1c (HbA1c) on the same day. According to the diagnostic criteria for diabetes of the Japan Diabetes Society, diabetes mellitus was defined as fasting plasma glucose (FPG) ≥ 126 mg/dL and/or a 2-hour glucose level after the 75-g OGTT ≥ 200 mg/dL and/or HbA1c ≥ 6.5% [ 12 ]. Normal glucose tolerance (NGT) was defined as FPG < 110 mg/dL, a 2-hour glucose level after the 75-g OGTT < 140 mg/dL, and HbA1c < 6.5%. The remaining participants were defined as having prediabetes.
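These diagnostic criteria translate directly into a small decision rule. The sketch below applies the thresholds exactly as stated above; the function name is ours, and Python is used only for illustration:

```python
def classify_glucose_tolerance(fpg, glu_120min, hba1c):
    """Glucose tolerance rule as described above (Japan Diabetes Society criteria).

    fpg and glu_120min in mg/dL, hba1c in %.
    """
    # Diabetes: FPG >= 126 and/or 2-hour glucose >= 200 and/or HbA1c >= 6.5%
    if fpg >= 126 or glu_120min >= 200 or hba1c >= 6.5:
        return "diabetes"
    # NGT: FPG < 110 and 2-hour glucose < 140 (HbA1c < 6.5% already holds here)
    if fpg < 110 and glu_120min < 140:
        return "NGT"
    # Everyone else: prediabetes
    return "prediabetes"

print(classify_glucose_tolerance(100, 130, 5.6))  # NGT
print(classify_glucose_tolerance(118, 150, 6.0))  # prediabetes
```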
Evaluation of β-Cell Function, Insulin Sensitivity, and Insulin Clearance
The homeostasis model assessment of insulin resistance index (HOMA-IR) was calculated as [fasting serum insulin (μU/mL) · FPG (mg/dL)/405]. The Matsuda index was calculated using the following equation: [10 000/square root of (FPG [mg/dL] · fasting insulin [μU/mL] · mean glucose during OGTT [mg/dL] · mean insulin during OGTT [μU/mL])] [ 13 ]. The insulinogenic index, reflecting early-phase glucose-dependent insulin secretion, was calculated using the following equation: [change in insulin/change in glucose from 0 to 30 minutes] [ 14 ]. Insulin secretion in response to blood glucose levels was also evaluated based on the ratio of the area under the curve (AUC) for insulin to glucose during OGTT (AUC-insulin/AUC-glucose). β-cell function was evaluated based on the disposition index (Matsuda index · AUC-insulin/AUC-glucose) [ 15 ]. Adipose tissue insulin resistance index (Adipo-IR) was calculated as [fasting insulin (μU/mL) · fasting FFA (mEq/L)] [ 16 ]. Serum insulin levels were measured by chemiluminescent enzyme immunoassay (FUJIREBIO Inc., Tokyo, Japan, Cat# 291290, RRID: AB_3065260). FFA was measured by an enzymatic method (SEKISUI Medical Co., Ltd, Tokyo, Japan). Insulin clearance was calculated as [C-peptide (ng/mL)/insulin (μU/mL)].
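As a worked illustration of these indices, the sketch below computes them from a 75-g OGTT profile sampled at 0, 30, 60, 90, and 120 minutes, using the trapezoidal rule for the AUCs. The function name and the numeric example at the end are ours (made up for illustration), not data from the study:

```python
import math

OGTT_TIMES = [0, 30, 60, 90, 120]  # sampling times in minutes

def _auc(values, times=OGTT_TIMES):
    # Trapezoidal area under the curve over the 120-minute test.
    return sum((values[i] + values[i + 1]) * (times[i + 1] - times[i]) / 2.0
               for i in range(len(times) - 1))

def ogtt_indices(glucose, insulin, fasting_ffa=None, fasting_cpeptide=None):
    """Glucose in mg/dL, insulin in uU/mL, FFA in mEq/L, C-peptide in ng/mL."""
    fpg, fins = glucose[0], insulin[0]
    mean_glu = sum(glucose) / len(glucose)
    mean_ins = sum(insulin) / len(insulin)
    auc_ratio = _auc(insulin) / _auc(glucose)

    homa_ir = fins * fpg / 405.0
    matsuda = 10000.0 / math.sqrt(fpg * fins * mean_glu * mean_ins)
    insulinogenic = (insulin[1] - insulin[0]) / (glucose[1] - glucose[0])  # 0 -> 30 min
    disposition = matsuda * auc_ratio  # Matsuda index x AUC-insulin/AUC-glucose

    out = {"HOMA-IR": homa_ir, "Matsuda": matsuda,
           "insulinogenic": insulinogenic,
           "AUC-ins/AUC-glu": auc_ratio, "disposition": disposition}
    if fasting_ffa is not None:       # Adipo-IR = fasting insulin x fasting FFA
        out["Adipo-IR"] = fins * fasting_ffa
    if fasting_cpeptide is not None:  # insulin clearance = C-peptide / insulin
        out["clearance"] = fasting_cpeptide / fins
    return out

# Illustrative (made-up) OGTT profile:
print(ogtt_indices([95, 150, 160, 140, 120], [6, 40, 55, 45, 30],
                   fasting_ffa=0.5, fasting_cpeptide=1.5))
```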
Measurement of Visceral Fat Area and Subcutaneous Fat Area
Intra-abdominal fat area and subcutaneous fat area were measured with a 0.3-T MRI scanner (AIRIS Vento, Hitachi, Japan) as described previously [ 11 ]. Briefly, T1-weighted transaxial scans were obtained. Intra-abdominal fat area and subcutaneous fat area at the fourth and fifth lumbar interspaces were measured as described previously using specialized image analysis software (AZE Virtual Place, Canon Medical Systems Corporation, Japan).
Other Measurements
Appendicular skeletal muscle mass (ASM) was measured using DXA (Discovery DXA System, Hologic, Tokyo, Japan). Skeletal muscle mass index (SMI) was calculated by dividing ASM by height squared in meters (kg/m²). Handgrip strength was measured twice on each side using a hand grip dynamometer (T.K.K.5401, Takei Scientific Instruments Co., Ltd., Japan). We used the average of the maximum values on each side for handgrip strength. Physical activity was evaluated using the International Physical Activity Questionnaire, which assesses different types of physical activity, such as walking and both moderate- and high-intensity activities [ 17 , 18 ]. Nutritional status was evaluated using a brief self-administered diet history questionnaire that contained 58 items about fixed portions and food types [ 19 , 20 ]. Hypertension was defined as systolic blood pressure ≥ 140 mmHg, diastolic blood pressure ≥ 90 mmHg, or current use of antihypertensive medications. Dyslipidemia was defined as low-density lipoprotein cholesterol ≥ 140 mg/dL, high-density lipoprotein cholesterol < 40 mg/dL, triglycerides ≥ 150 mg/dL, or current use of lipid-lowering agents. Cardiovascular disease was defined, following the World Health Organization definition, as ischemic heart disease, cerebrovascular disease, or peripheral arterial disease. Sarcopenia was defined as weak handgrip strength and low SMI, based on the definition of the Asian Working Group for Sarcopenia (AWGS) 2019 [ 21 ].
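The SMI and sarcopenia definitions above can be sketched as follows: SMI is ASM divided by height squared, and sarcopenia combines weak handgrip strength with low SMI. The AWGS 2019 cut-off values are not restated in this text, so the sex-specific thresholds below are assumptions included only to make the sketch runnable:

```python
def smi(asm_kg, height_m):
    # Skeletal muscle mass index: appendicular skeletal muscle mass / height^2 (kg/m^2)
    return asm_kg / height_m ** 2

def is_sarcopenic(asm_kg, height_m, grip_kg, male,
                  grip_cut=(28.0, 18.0), smi_cut=(7.0, 5.4)):
    # Sarcopenia = weak handgrip strength AND low SMI (AWGS 2019-style rule).
    # The numeric cut-offs (male, female) are commonly cited DXA values but are
    # assumptions for this sketch, not taken from the text above.
    g_cut = grip_cut[0] if male else grip_cut[1]
    s_cut = smi_cut[0] if male else smi_cut[1]
    return grip_kg < g_cut and smi(asm_kg, height_m) < s_cut

print(round(smi(20.0, 1.70), 2))                  # 20 kg ASM at 1.70 m
print(is_sarcopenic(16.0, 1.70, 24.0, male=True))
```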
Statistical Analysis
We classified participants by age (65-69 years, 70-74 years, 75-79 years, and 80-84 years) and divided them into 3 groups (NGT, prediabetes, and diabetes). We used IBM SPSS Statistics for Windows, version 28.0. (IBM Corp., Armonk, NY, USA) for statistical analysis. Data are presented as means ± SD, means ± SE, numbers (%), and medians (interquartile range), as appropriate. To approximate the normal distribution, log-transformed values were used in the analysis, as appropriate. Differences in means and proportions were tested using one-way analysis of variance (ANOVA) and chi-square tests. The Jonckheere-Terpstra trend test was used to investigate trends with aging. Differences of AUC-insulin/AUC-glucose, insulinogenic index, Matsuda index and disposition index were tested using ANCOVA with adjustment for sex, hypertension, dyslipidemia, cardiovascular disease, and sarcopenia. The relationship between the Matsuda index, disposition index, and various metabolic parameters was assessed with Pearson or Spearman correlation coefficients, as appropriate. Multiple regression analysis was performed to determine the independent contribution of insulin resistance and β-cell function. In this study, 2 models were used in regression analyses. Model 1 adjusted for age, sex, and visceral fat area (VFA). Model 2 adjusted for variables in Model 1 plus subcutaneous fat area (SFA), ASM, handgrip strength, FFA, adiponectin, C-reactive protein (CRP), and physical activity. All statistical tests were two-sided with a 5% significance level. | Results
The characteristics and metabolic parameters of the study participants are shown in Table 1 . There were no differences in the proportion of men and women across groups. The proportion of participants with normal glucose tolerance was significantly lower in the older groups, while the proportion of participants with diabetes was significantly higher in the older groups. Although physical activity levels and sedentary time were comparable among the groups, energy intake was higher in the older groups: carbohydrate intake was significantly higher in the group aged 80 to 84 years than in the 2 groups aged under 75 years, and protein intake was significantly higher in the 2 groups older than 75 years than in the 2 groups younger than 75 years. Body weight was lower and height was relatively even lower in the older groups, resulting in a higher body mass index (BMI). In terms of body composition, percent body fat was higher in the 2 groups older than 75 years than in the age group 65 to 69 years. Although ASM was significantly lower in the 2 groups aged over 75 years than in the 2 groups aged under 75 years, SMI was comparable among the groups. Handgrip strength was significantly higher in the age group 65 to 69 years than in the 3 groups aged over 70 years, and significantly higher in the age group 70 to 74 years than in the 2 groups aged over 75 years. VFA was significantly higher in the 80 to 84 age group than in the 65 to 69 age group, while SFA was comparable among the groups. The prevalence of sarcopenia, hypertension, and cardiovascular disease increased with age. Trend analysis also showed similar age-related changes in these parameters, except for physical activity level and alcohol intake.
FPG and fasting serum insulin levels were similar among the groups. However, HbA1c, AUC-glucose, and AUC-insulin were significantly higher in the 3 groups aged over 70 years than in the age 65 to 69 group. The Matsuda index was significantly lower in the age 80 to 84 group than in the age 65 to 69 group, suggesting a decrease in insulin sensitivity with age ( Table 2 ). The insulin secretion index, calculated as AUC-insulin/AUC-glucose, and the insulinogenic index were comparable among the groups. The disposition index was significantly lower in the group aged 70 to 74 years and the group aged 80 to 84 years compared with the group aged 65 to 69 years, suggesting a decrease in β-cell function with age. The Matsuda index and disposition index did not differ between groups older than 70 years, but the trend analysis showed that these decreased with age. Fasting FFA levels were significantly higher in the 3 groups aged over 70 years compared with the group aged 65 to 69 years. FFA levels were also higher in the 2 groups aged over 75 years than in the group aged 70 to 74 years. AUC-FFA during OGTT was also significantly higher in the 3 groups older than 70 years compared with the group aged 65 to 69 years and higher in the group aged 80 to 84 years than in the groups younger than 80 years. Adipo-IR was significantly higher in the 2 groups aged over 75 years compared with the age 65 to 69 group. Adiponectin was slightly higher and AUC-insulin clearance slightly lower in the older groups. Trend analysis also showed similar age-related changes in these parameters.
Next, we evaluated the relationship between insulin secretion and insulin sensitivity with age based on glucose tolerance, using the mean value and standard error of AUC-insulin/AUC-glucose and the Matsuda index in 3 groups (NGT, prediabetes, diabetes mellitus) ( Fig. 1 ). In the NGT groups, insulin sensitivity appeared to decline with age, and higher insulin secretion associated with aging compensated for this decrease in insulin sensitivity. However, a clear relationship between insulin secretion or insulin sensitivity and age was not observed in participants with prediabetes or diabetes.
Since the Matsuda index and disposition index decreased with age, we next tried to identify their determinants. We performed a simple correlation analysis between the Matsuda index or disposition index and various metabolic parameters related to fat accumulation (VFA, SFA, FFA, adiponectin, and CRP), muscle mass (SMI and ASM), muscle strength, and physical activity that have been previously reported to be associated with insulin action [ 22 ]. As shown in Table 3 , the Matsuda index was significantly correlated with age, VFA, SFA, SMI, ASM, FFA, adiponectin, and physical activity levels. The disposition index was significantly correlated with all of the parameters evaluated except for physical activity. VFA had the greatest correlation coefficient for both. In addition, SMI and ASM were negatively correlated with the Matsuda index and disposition index. Multiple regression revealed that VFA, SFA, and FFA were negatively correlated, and adiponectin and physical activity positively correlated, with the Matsuda index. Similarly, VFA, SFA, and FFA were negatively correlated, and adiponectin positively correlated, with the disposition index ( Table 4 ). Among the parameters evaluated, VFA and FFA had a relatively high correlation with both the Matsuda index and disposition index. On the other hand, we did not find an independent correlation of ASM or handgrip strength with the Matsuda index or the disposition index. | Discussion
The purpose of this study was to investigate whether insulin sensitivity and β-cell function deteriorate after the age of 65 years, and to identify the factors contributing to the exacerbation of glucose tolerance with aging among older adults with no history of diabetes. The present study showed that the prevalence of newly diagnosed diabetes was higher in the older groups. The Matsuda index and disposition index decreased with age, suggesting that the increased prevalence may be due to decreased insulin sensitivity and β-cell function. In addition, multiple regression revealed that both the Matsuda index and disposition index were strongly correlated with VFA and FFA; thus, blood FFA levels and their source could be determinants.
Previous studies have reported that a lower insulin secretion index and impaired insulin sensitivity in older adults, as compared with younger individuals, contribute to impaired glucose tolerance in older adults [ 4 , 6 ]. However, it remains unclear whether these trends are further exacerbated by aging beyond age 65. Our findings suggested that the ability to secrete insulin in response to blood glucose levels, reflected by the insulinogenic index and AUC-insulin/AUC-glucose, does not worsen after the age of 65 years, but insulin sensitivity (Matsuda index) and the disposition index showed age-related declines. Our results differ from a previous report that demonstrated that the insulinogenic index of older adults was lower than that of younger adults [ 4 ]. Taken together, these data indicate that the age-related decrease in the insulinogenic index may be slower in older adults. However, it is also possible that these indices are affected by features of the present study subjects. Therefore, further study is needed in this regard. The disposition index is useful for assessing an individual's ability to compensate for changes in insulin sensitivity by adjusting insulin secretion; it is considered a more accurate way to evaluate β-cell function [ 15 ]. In fact, as shown in Fig. 1 , only individuals with normal glucose tolerance were able to secrete insulin to compensate for age-related insulin resistance, while individuals with prediabetes or type 2 diabetes had more insulin resistance but failed to secrete insulin to compensate for it.
In the present study, we observed an exacerbation of insulin resistance with increasing age, which was positively correlated with increasing SFA, VFA, and FFA but negatively correlated with adiponectin. Similarly, it has been suggested that increased FFA and Adipo-IR and decreased adiponectin occur with increased body fat, both of which promote ectopic fat accumulation, leading to insulin resistance and metabolic disorders [ 23 , 24 ]. In addition, it has also been suggested that SFA and VFA are major sources of whole-body FFA release [ 25-28 ]. In the present study, FFA was only weakly correlated with VFA ( r = 0.06, P = .024). The results of multiple regression analysis also suggest that FFA is a factor independent of VFA in the Matsuda index. In addition, FFA showed a significant positive correlation with VFA only in the 65 to 69 age group, but not in the groups aged over 70 years (data not shown). On the other hand, a significant negative correlation was found between FFA and ASM (the latter being inversely related to total body fat) in the age groups over 70 years (data not shown). This suggests that total body fat mass, rather than VFA, may be a major determinant of FFA in older adults. There was little change in adiponectin levels or SFA, although there was an increase in VFA and FFA with aging. These results suggest that age-related increases in VFA and FFA might be responsible for the exacerbation of insulin resistance with aging. On the other hand, VFA and adiponectin showed a significant negative correlation ( r = −0.42, P < .001). This suggests that adiponectin is not a factor that declines with age, although it does affect insulin resistance.
Multiple regression revealed that FFA is the most relevant variable for the disposition index. When FFA concentrations are chronically elevated, a variety of changes occur in β cells, such as increased endoplasmic reticulum stress, increased oxidative stress, more inflammation, and increased autophagy flux [ 29 ]. Although these changes are compensatory mechanisms necessary for β-cell survival, in β cells of individuals susceptible to diabetes, these stresses might result in decreased insulin secretion and increased apoptosis, leading to impaired glucose tolerance.
Decreases in skeletal muscle mass [ 30 ] and strength [ 31 , 32 ] are associated with insulin resistance in older adults. While we observed a simple correlation between skeletal muscle mass, the largest glucose-uptake organ, and both the Matsuda index and disposition index, multiple regression highlighted body fat mass, rather than skeletal muscle mass, as a significant contributor to the Matsuda index and disposition index. This suggests that a reduction in skeletal muscle mass might exert a less direct influence on insulin sensitivity or β-cell function in older adults. Given the inverse relationship between body fat mass and skeletal muscle mass, skeletal muscle mass might primarily serve as a confounding factor in this context. Similarly, handgrip strength showed no correlation with the Matsuda index. Although it showed a simple correlation with the disposition index, multiple regression analysis showed no independent association in this study. On the other hand, low handgrip strength is associated with insulin resistance in patients with type 2 diabetes [ 33 ], and low handgrip strength has been shown to be an independent risk factor for the development of type 2 diabetes [ 34 ]. These data suggest that while low handgrip strength is a risk factor for type 2 diabetes, it is not linked to insulin resistance before the onset of diabetes.
In terms of inflammation pathways, chronic inflammation markers such as high-sensitivity CRP, IL-6, and TNF-α have been shown to be elevated prior to the onset of diabetes [ 35 ]. On the other hand, a prospective study investigating the risk of developing type 2 diabetes reported no significant association between high-sensitivity CRP and the development of diabetes when adjusted for other factors such as BMI [ 36 ]. The association between CRP and insulin resistance may be an epiphenomenon of obesity or adiposity, rather than an independent factor, and, in fact, a study of Korean subjects showed that slight weight gain (BMI ≥23 kg/m²) was a greater risk for insulin resistance than high CRP levels [ 37 ]. Thus, in this study, we included CRP as an adjustment factor in the multiple regression analyses for the Matsuda index and disposition index ( Table 4 ), although CRP showed no independent association with either index in this study.
Postprandial hyperglycemia is a common characteristic of glucose intolerance in older adults [ 38 ]. In the present study, although HbA1c, AUC-glucose, and AUC-insulin increased with age, there were no differences in fasting glucose or insulin levels among the groups. The incidence of type 2 diabetes is significantly higher in older adults than in younger people. Periodic and appropriate screening is essential for early diagnosis and proper treatment. Thus, assessing postprandial glucose levels, rather than fasting levels, appears to be more important in screening for type 2 diabetes in older adults.
There are several limitations in this study. First, because of the cross-sectional design, it was not possible to track changes in insulin secretion or insulin resistance over time for each individual. Further observational studies are planned to clarify these issues. Second, as age increased, participants might have been healthier, potentially introducing a survival bias. The group aged over 80 years had the highest energy intake, which might be related to this bias. If so, this high-calorie, high-carbohydrate diet may contribute to the deterioration of glucose tolerance. Thus, the older the age group, the more caution should be exercised in interpreting the results. Third, because the study population consisted of older adults who were living in central Tokyo and had a higher level of education, caution is required when generalizing our findings to other populations. Finally, since participants younger than 65 years were not included, these results are not applicable to those under 65 years of age.
In conclusion, this study revealed that glucose tolerance declines with age in older Japanese adults aged 65 years or older, potentially due to insulin resistance and decreased β-cell function, at least in part caused by age-related fat accumulation and elevated FFA levels. To address these problems and prevent the pandemic of new-onset diabetes in older adults, improvements in body composition through appropriate diet and exercise might be effective to counter exacerbation of glucose tolerance, even in older adults. In addition, regular measurement of body composition and postprandial blood glucose levels, as well as body weight and fasting blood glucose levels, may be useful in early identification of these changes. | Abstract
Context
Older adults have a high prevalence of new-onset diabetes, often attributed to age-related decreases in insulin sensitivity and secretion. It remains unclear whether both insulin sensitivity and secretion continue to deteriorate after age 65.
Objective
To investigate the effects of aging on glucose metabolism after age 65 and to identify its determinants.
Methods
This cross-sectional study involved 1438 Japanese older adults without diabetes. All participants underwent a 75-g oral glucose tolerance test (OGTT). Body composition and fat distribution were measured with dual-energy X-ray absorptiometry and magnetic resonance imaging. Participants were divided into 4 groups by age (65-69, 70-74, 75-79, and 80-84 years) to compare differences in metabolic parameters.
Results
Mean age and body mass index were 73.0 ± 5.4 years and 22.7 ± 3.0 kg/m², respectively. The prevalence of newly diagnosed diabetes increased with age. Fasting glucose, fasting insulin, the area under the curve (AUC)-insulin/AUC-glucose and insulinogenic index were comparable between groups. AUC-glucose and AUC-insulin during OGTT were significantly higher and Matsuda index and disposition index (Matsuda index · AUC-insulin/AUC-glucose) were significantly lower in the age 80-84 group than in the age 65-69 group. Age-related fat accumulation, particularly increased visceral fat area (VFA), and elevated free fatty acid (FFA) levels were observed. Multiple regression revealed strong correlations of both Matsuda index and disposition index with VFA and FFA.
Conclusion
Glucose tolerance declined with age in Japanese older adults, possibly due to age-related insulin resistance and β-cell deterioration associated with fat accumulation and elevated FFA levels. | The high incidence of new-onset diabetes in older adults is a well-known and pressing issue, especially in countries like Japan, which has the highest population aging rate in the world [ 1 ]. Older adults have reduced β-cell function and increased insulin resistance, leading to a higher prevalence of diabetes [ 2-7 ]. For example, a large study of Chinese adults reported a significant reduction in early-phase insulin secretion, as measured with the insulinogenic index, in older individuals (>60 years) compared with younger (20-39 years) and middle-aged (40-59 years) counterparts [ 4 ]. Furthermore, insulin sensitivity assessed through the meal and intravenous glucose tolerance test was lower in older adults than in young adults [ 6 ]. Despite these findings, it remains uncertain whether these factors deteriorate further after age 65, potentially exacerbating glucose tolerance. If so, it also remains uncertain what causes the deterioration in insulin sensitivity and β-cell function. Reduced muscle mass and increased body adiposity associated with aging could contribute to the deterioration in insulin sensitivity [ 8 , 9 ]. In addition, chronic exposure to high concentrations of free fatty acids (FFAs) could be toxic for β cells [ 10 ]. Unraveling this mechanism could contribute to establishing an effective strategy for the prevention of new-onset diabetes mellitus in older adults.
The purpose of this study was to elucidate the effects of aging on glucose metabolism in older adults after age 65 and to identify its determinants. In this study, we assessed glucose tolerance, insulin secretion, and insulin sensitivity in community-dwelling older adults without a history of diabetes using a 75-g oral glucose tolerance test (OGTT). | Acknowledgments
The authors would like to thank Liu L., Aoki T., Nakagata T., Hui H., and all staff members who contributed to data collection at the Sportology Center.
Funding
This study was supported by Strategic Research Foundation at Private Universities (S1411006) and KAKENHI (18H03184) grants from the Ministry of Education, Culture, Sports, Science and Technology of Japan; Mizuno Sports Promotion Foundation; and the Mitsui Life Social Welfare Foundation.
Author Contributions
H.N., H.K., Y.S., and Y.T. performed the research and contributed to study design, data collection, interpretation of results, and writing and editing of the manuscript. S.K., H.T., M.S., N.Y., D.S., and S.K. participated in data collection and analysis and contributed to the discussion. Y.N. and R.K. contributed to the discussion. H.W. contributed to the study design and reviewed and edited the manuscript.
Disclosures
The authors have nothing to disclose.
Data Availability
Some or all datasets generated during and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.
Clinical Trial Registration
The study protocol was approved by the ethics committee of Juntendo University in November 2015 (Nos. 2015078 and M15-0057).
Abbreviations
adipose tissue insulin resistance index
appendicular skeletal muscle mass
area under the curve
C-reactive protein
dual-energy x-ray absorptiometry
free fatty acid
fasting plasma glucose
glycated hemoglobin (hemoglobin A1c)
homeostatic model assessment for insulin resistance
magnetic resonance imaging
normal glucose tolerance
oral glucose tolerance test
subcutaneous fat area
skeletal muscle mass index
visceral fat area | CC BY | no | 2024-01-16 23:36:47 | J Endocr Soc. 2023 Dec 20; 8(2):bvad164 | oa_package/32/ed/PMC10768880.tar.gz |
||
PMC10769316 | 38187745 | INTRODUCTION
In the visual cortex of mammals, the precise wiring of geniculocortical afferents and intrinsic connections that forms the organizational basis for the systems of ocular dominance (OD) and orientation selectivity develops from, and is maintained by, a fine balance of activity-based neuronal interactions ( Katz and Shatz, 1996 ; Crair, 1999 ).
The anatomical and functional plastic reorganization of these systems following altered visual experience during the critical period also relies on the same activity-dependent competitive mechanisms. For example, in kittens undergoing monocular deprivation (MD) during the critical period, the weak, less active eye loses its connections and the capability to drive cortical neurons, while the connections of the more active eye are expanded along with its functional control of cortical territory ( Wiesel, 1982 ). The central tenet of activity-dependent competition in the OD system envisions that geniculocortical afferents serving each eye compete for a limited amount of trophic messenger molecules supplied by the postsynaptic cortical neuron. In addition, the hypothesis suggests that the more active geniculocortical connections have an increased requirement for these molecules and capture them to the detriment of the less active connections, which consequently retract. A growing set of observations strongly indicates that in the visual cortex, activity-dependent competition between geniculocortical afferents involves families of neurotrophins and their high-affinity tyrosine kinase (trk) receptors ( Dominici et al., 1991 ; Carmignoto et al., 1993 ; Riddle et al., 1995 , 1997 ). Indeed, if a specific neurotrophin is the limiting factor, then either local application of an excessive dose of this neurotrophin or removal of the endogenous neurotrophin should block competitive interactions among afferents.
In a physiological study using electrophysiological and optical imaging techniques in kittens during the critical period, Gillespie et al. (2000) examined the functional effects of local infusion of the trkB ligand NT-4/5 into the primary visual cortex of monocularly deprived kittens. The effects of NT-4/5 were dramatic in that OD plasticity was completely abolished: responses through the deprived eye were not lost, and the majority of cells were binocularly activated. Furthermore, orientation selectivity was lost through both eyes, although general neuronal responsiveness suffered only a mild reduction. Taken together, these observations led to the hypothesis that large amounts of NT-4/5 prevail over competitive mechanisms driven by the difference in activity between the deprived and non-deprived eyes and cause overgrowth and sprouting of thalamocortical and/or corticocortical connections. Sprouting of afferent terminals and maintenance of exuberant, nonselective connections are also consistent with the loss of orientation selectivity, a modality that requires fine tuning and specificity of circuits.
Anatomical studies are consistent with this interpretation. Local infusion of massive doses of these neurotrophins via osmotic minipumps causes the desegregation of already formed OD columns in layer 4, presumably by overgrowth of geniculate terminals ( Cabelli et al., 1995 , 1997 ). Desegregation of OD columns also followed the presumed removal of endogenous neurotrophins from the visual cortex by infusion of trk-IgG fusion proteins. In addition, local infusion of another trkB ligand, brain-derived neurotrophic factor (BDNF), via osmotic minipumps induces specific sprouting in layer 4 of both deprived and non-deprived geniculate afferents in MD animals and causes sprouting of these afferents in normal animals ( Hata et al., 2000 ). Finally, delivery of NT-4/5-coated beads into the visual cortex prevents the shrinkage of geniculate somata following MD ( Riddle et al., 1995 ).
These anatomical studies have focused on the effects of neurotrophins on the development and plasticity of the geniculocortical afferents. In this study, we sought to explore the effect of infused NT-4/5 on the morphology of the postsynaptic neurons that receive the plethora of afferent connections.
Experimental evidence shows that neurotrophins have a preeminent role in the growth and regulation of dendritic arbors of cortical neurons, with each neurotrophin eliciting both a specific pattern of dendritic changes and a modulatory effect on the action of other neurotrophins ( Ruit and Snider, 1991 ; McAllister et al., 1995 , 1997 ; Baker et al., 1998 ). However, these elegant anatomical studies have focused on the early development of neurons in organotypic slices of the developing cortex in which the afferent system has been removed. In this work, we have studied neuronal morphology in an intact visual cortex infused with NT-4/5 in vivo.
Visual cortical neurons were labeled using the DiOlistics method. We found that, in most cortical layers, NT-4/5 causes excessive sprouting of spine-like processes on both dendrites and neuronal somata. | MATERIAL AND METHODS
Experiments were performed on 5 kittens (K349, K352, K359, K362 and K396) born and housed with their mother in the University of California, San Francisco cat colony. All procedures were approved by the Committee on Animal Research (University of California, San Francisco) in accordance with National Institutes of Health guidelines on the use of animals in neuroscience research. All efforts were made to minimize the number of animals used in this study. Neurotrophin NT-4/5 was delivered to the visual cortex through an osmotic minipump from postnatal day (P) 24 to P36, from P28 to P35, from P28 to P34, from P26 to P31 and from P25 to P29, respectively.
Implantation of osmotic minipump.
Alzet osmotic minipumps, models 2001 or 1002 (Alza, Palo Alto, CA), delivering at rates of 1μl/hr or 0.5μl/hr, respectively, were filled under sterile conditions with a solution containing 0.2mg/ml of NT-4/5 in 140mM Na-acetate, 1% Bovine Serum Albumin (BSA; Sigma, St. Louis, MO) in sodium phosphate-buffered saline (PBS, 0.01M). NT-4/5 was kindly provided by Genentech Inc. (South San Francisco, CA) at a concentration of 0.6mg/ml and was diluted to match the concentration used by Gillespie et al. (2000) in a study on the physiological effects of infused NT-4/5 on the developing visual cortex. The surgical implantation of the minipumps was carried out under sterile conditions following a protocol routinely used in the laboratory. Briefly, the animal was initially anesthetized with Ketamine (10-20mg/Kg; Abbott, North Chicago, IL) supplemented with Isoflurane (1-5%) in Oxygen (1.5%) delivered through an endotracheal tube. In order to prevent gag reflexes, the insertion of the endotracheal tube was facilitated by anesthetizing the laryngeal area with a 1% solution of xylocaine. The animal breathed air spontaneously; expired CO2 was monitored with a capnograph (Ohmeda, Louisville, CO). A solution of 2.5% dextrose in lactated Ringer's was delivered intravenously at a rate of 25 ml/hr. The animal was placed in a stereotaxic apparatus; the eyes were protected with an ophthalmic ointment. The minipumps were implanted bilaterally into the visual cortex. Each minipump was attached to a 30-gauge needle that was inserted into the skull through a small drilled hole and reached the visual cortex at the Horsley-Clarke stereotaxic coordinates AP 0.0/−2.0 and ML 2.0. The needle was then lowered with a micromanipulator to a depth of 2mm from the surface of the brain and held in place by dental cement. The minipump itself was housed in a pocket formed in the nape of the neck. At the end of the surgery the animal was returned to its mother and had normal visual experience.
Perfusion and Histological procedures.
Four to 12 days after minipump implantation, the animal was deeply anesthetized with an overdose of sodium thiopental (Nembutal, Abbott, 150mg/Kg) and perfused transcardially with PBS followed by 2% paraformaldehyde in PBS. The brain was quickly removed from the skull and postfixed for two hours in the same fixative. A coronal cut was made in both hemispheres in the plane of entrance of the minipump needle, dividing the hemispheres into 4 blocks: right anterior, right posterior, left anterior and left posterior. The cortical blocks were severed from subcortical structures, embedded in 4% agar and cut on a vibratome in the coronal plane. Section thickness varied according to the histological procedure and will be specified in the appropriate sections.
DiOlistics.
Animals K349, K352 and K359 were used for this experiment. The 4 brain blocks were cut coronally in repeating series of one section 200-250 μm thick followed by two sections 70 μm thick. Sections were collected in individual wells containing 0.1M PBS and 0.05% Thimerosal (PBS-t). Each section was identified by its distance from the site of entrance of the minipump needle (i.e. from the center of NT-4/5 infusion) along the anteroposterior coordinates. The 200μm sections were used in DiOlistics experiments for neuronal labeling, while one series of 70 μm sections was used for NT-4/5 immunohistochemistry to evaluate the spread of the infused neurotrophin. This procedure allowed the assignment of neuronal elements to areas affected or non-affected by the infused NT-4/5.
In the 200-250 μm section series, cortical neurons were labeled with lipophilic dyes (DiI, 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate; DiO, 3,3'-dioctadecyloxacarbocyanine perchlorate; DiD, 1,1'-dioctadecyl-3,3,3',3'-tetramethylindodicarbocyanine perchlorate; Molecular Probes, Eugene, OR) of different excitation wavelengths (488nm, 568nm and 647nm) using the DiOlistics procedure described by Gan et al. (2000) . Briefly, 0.7 μm or 1.7 μm tungsten particles coated with one of the three lipophilic dyes or with 3 different pairwise combinations of the dyes (DiI+DiO, DiI+DiD, DiO+DiD) were shot with a helium-powered gene gun (Helios Gene Gun System, Cat# 165-2431, Bio-Rad, CA) at 180-200psi onto cortical sections. Sections were protected from an excessive pressure wave by interposing a filter between the section and the gene gun (3μm pore size, 8×10⁵ pores/cm²; Millipore). The filter also prevented large clusters of dye from landing on the sections. No more than six sections were processed at one time and were kept in PBS-t. They were then observed and analyzed by laser-scanning confocal microscopy (Bio-Rad, CA) after mounting on a glass slide using an antifade mounting medium such as Vectashield (Vector, Burlingame, CA) or preferably SlowFade (Molecular Probes, Eugene, OR), which does not contain glycerol and does not affect membrane integrity. Sections on glass slides were framed with thin strips of parafilm that provided a cushion between the slide and the coverslip and avoided deformation of the sections.
Confocal images were collected as z-stacks of 2-dimensional images (1280x1024 pixels) stepping through the depth (z-axis) of the coronal section through the labeled element. The interval between images of the stack was 1 μm when an entire neuron was imaged at a pixel size of 0.57-0.20 μm/pixel, or 0.4 μm when the aim was to count dendritic spines at a pixel size of 0.15-0.10 μm/pixel. The position of every neuronal element was carefully plotted and recorded onto the adjacent sections stained for NT-4/5 immunohistochemistry (see Fig. 2 ). At the end of the analysis, sections were transferred back to PBS-t.
Neuronal reconstruction and spine analysis were performed using the Confocal module of Neurolucida software and NeuroExplorer software (both from Microbrightfield, Colchester, VT). Statistical significance was assessed with the nonparametric Mann-Whitney U-test. Projection of 3-D confocal image stacks into 2-D images was performed with Confocal Assistant software (by Todd Clark Brelje), available on-line. Anatomical figures were prepared by transferring Confocal Assistant images to Photoshop (Adobe, San Jose, CA).
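The 3-D-to-2-D projection step mentioned above is conventionally a maximum-intensity projection along the z-axis, which keeps the brightest voxel at each pixel position. A minimal numpy sketch (the stack values here are random placeholders, not real image data):

```python
import numpy as np

# Hypothetical z-stack: 5 optical sections, each a 4 x 4 pixel image
rng = np.random.default_rng(0)
stack = rng.integers(0, 255, size=(5, 4, 4))

# Maximum-intensity projection collapses the z-axis, keeping the
# brightest voxel at each (row, column) position
mip = stack.max(axis=0)
```

The same idiom extends directly to full-size 1280x1024 stacks; only the array shape changes.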
NT-4/5 immunohistochemistry.
Immunohistochemistry for NT-4/5 was performed in all animals and started on the day of perfusion. A series of 70 μm sections (or 100 μm sections for K362) was washed in PBS and transferred to a blocking solution containing 2.5% BSA, 3-5% Normal Horse or Normal Goat Serum and 0.3% Triton-X in PBS-t for 2hrs to avoid non-specific labeling. Sections were then incubated for 24-48hrs in the blocking solution containing chicken anti-human NT-4/5 antibody (10 μg/ml, Promega, Madison, WI) or rabbit anti-human NT-4/5 (1:1000, Chemicon International, Temecula, CA). The sections were subsequently rinsed 3x10min in PBS-t, incubated overnight in horse anti-chicken or goat anti-rabbit biotinylated secondary antibody (1:200; Vector Laboratories, Burlingame, CA) in blocking solution, rinsed 3x10 min in PBS-t and finally transferred to a solution of Cy3-conjugated avidin (1:200; Jackson ImmunoResearch Laboratories, West Grove, PA) in PBS for 5-12 hours. Sections were analyzed by fluorescence microscopy and photographed using a digital camera (Spot software).
Synaptophysin immunohistochemistry.
In two animals (K362 and K396), NT-4/5 was infused bilaterally for 5 days and 4 days, respectively. After perfusion, the four cortical blocks were cut on a vibratome into series of 100 μm and 70 μm sections, respectively. One series was processed for NT-4/5 immunohistochemistry to evaluate NT-4/5 spread, as described above. Another series of sections was processed for synaptophysin fluorescence immunohistochemistry. In K362 the membrane-permeant Triton-X was excluded from the synaptophysin immunohistochemistry in order to combine this reaction with the DiOlistics method (data not shown); Triton-X (0.3%) was instead added to all solutions in K396. Sections were incubated in a blocking solution containing 2.5% BSA, 3% Normal Donkey Serum in PBS-t, then transferred for 48 hrs to a solution of mouse anti-synaptophysin (1:20, Roche Applied Science, Indianapolis, IN). Sections were then washed in PBS-t 3x10 min, incubated overnight with biotinylated anti-mouse secondary antibody (1:200, Vector Laboratories) and finally transferred to a solution of Cy2-conjugated avidin (1:200; Jackson ImmunoResearch Laboratories) in PBS for 5-12 hours. After several washes in PBS-t, sections were analyzed by confocal microscopy. To assess the general density of synaptophysin immunoreactivity, images were collected as single 2D-images with the filter suitable for Cy2 emission (510nm), using the same parameters of laser beam power, gain, iris and black levels for all sections. These analyses were also performed on the same day to minimize random changes in the power of the confocal microscope laser. For every section, special care was taken to collect the confocal image from the level along the z-axis where the immunostaining was the brightest. To quantify the density of synaptophysin immunostaining, we measured the average brightness of pixels from confocal images analyzed as TIFF files using IDL software (Research Systems Inc., Boulder, CO).
For each image, the area occupied by synaptophysin immunostaining was calculated by subtracting from the total area the regions occupied by somata, blood vessels and thick dendrites, which appeared black. The Mann-Whitney U-test was used for statistical comparisons.
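As a hedged illustration of the quantification described above, the mean and upper-percentile brightness over stained tissue can be computed by masking out the near-black regions first. The threshold and toy image below are assumptions for illustration, not the values used in the study:

```python
import numpy as np

def staining_density(image, dark_threshold=10):
    """Mean, 90th- and 95th-percentile pixel brightness computed only over
    stained tissue; near-black regions (somata, blood vessels, thick
    dendrites) are excluded, as described in the text."""
    stained = image[image > dark_threshold]
    return stained.mean(), np.percentile(stained, 90), np.percentile(stained, 95)

# Toy 2 x 3 "image": zeros stand in for unstained somata/vessels
img = np.array([[0, 0, 100],
                [200, 50, 0]], dtype=float)
mean_b, p90, p95 = staining_density(img)
```

In practice the same function would be applied to each TIFF image, and the per-region values compared with the Mann-Whitney U-test.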
GAD-65 immunohistochemistry.
In one animal (K396) a series of 70 μm sections was processed for GAD-65 immunofluorescence according to the method described by Silver and Stryker (1999) . Briefly, floating sections were incubated for 2 hours in a blocking solution consisting of 0.1M PBS-t, 2.5% BSA, 0.5% Triton X-100, 3% Normal Rabbit Serum (Sigma). Sections were then incubated for 48hrs in the blocking solution containing 1:1000 rabbit anti-GAD-65 (Chemicon, Temecula, CA). Sections were subsequently rinsed 3x10min in PBS-t, incubated overnight at 4°C in biotinylated goat anti-rabbit secondary antibody (1:200; Sigma) in blocking solution, rinsed 3x10 min in PBS-t and finally incubated overnight at 4°C in a solution of Cy2-conjugated avidin (1:200; Jackson ImmunoResearch Laboratories) in PBS-t. After a final series of three washes in PBS-t for 10 min each, sections were mounted on gelatinized microscope slides and coverslipped using an anti-fade mounting medium (Vectashield; Vector). Confocal images and analysis of GAD-65 staining density were performed using the same criteria described above for synaptophysin staining density.
The Results are presented in three sections. In the first section, the experimental paradigm will be presented in detail. The second section describes the NT-4/5-mediated structural changes in visual cortical neurons. Finally, we analyze the pattern of immunostaining of a presynaptic marker, synaptophysin, in NT-4/5 positive and negative regions, and relate this axonal feature to the structural plasticity observed in dendrites in NT-4/5 affected areas.
Localization of DiOlistically labeled neuronal elements relative to the infused NT-4/5
The aim of the study was to investigate possible plastic effects of the neurotrophin NT-4/5 on the anatomical structure of cortical neurons. To this end, the neurotrophin was continuously infused for 4-12 days into the visual cortex of kittens during the 3rd and 4th weeks of age. No differences in the physiological effects were seen with infusions within the 4-12 day range ( Gillespie et al., 2000 ). Cortical neurons were labeled with fluorescent dyes using the DiOlistics method ( Gan et al., 2000 ). Fixed, 200-250μm-thick coronal sections through the lateral gyrus were shot with dye-coated particles. The particles are blown at random onto the sections and label cellular elements that come into contact with them. The lipophilic dye then spreads passively along the membranes and outlines membrane elements up to their finest processes. Confocal images through successive thin planes of a labeled neuronal element reveal its anatomical features with astonishing detail ( Fig. 1 ).
We collected neurons from the crown and the medial bank of the lateral gyrus, where areas 17 and 18 are located. Generally, sections were shot with few particles to avoid a high density of labeling, particularly of axons of passage, and the consequent uncertainty in sorting out individual neurons for reconstruction. However, the use of tungsten particles coated with three different lipophilic dyes and with pairwise combinations of these dyes allowed neurons to be labeled with a wide range of colors ( Fig.1D ) and facilitated the distinction between contiguous neuronal elements. Our initial plan was to serially reconstruct whole neurons in all cortical layers. However, labeled neurons were often at the very surface of the section because tungsten particles did not penetrate deeper than 50-100μm into the tissue, and clear confocal images were obtained only within 50-60μm of the top of the sample. Therefore, neurons under investigation often had cut dendrites, making complete reconstruction impossible. Qualitatively, we observed that close to the NT-4/5 infusion, the proximal segment of dendrites was unusually rich in spines. We therefore shifted our attention to spine density measurements in both apical and basal dendrites of well-identified order.
In order to assess whether reconstructed elements (neurons or dendritic segments) were located inside or outside NT-4/5 affected areas, their locations were carefully mapped in relation to blood vessels and transposed onto the adjacent 70μm-thick cortical section stained for NT-4/5 immunofluorescence. Immunopositive neuronal somata and dendrites were found in all layers. Figure 2F shows the general features of NT-4/5 immunoreactivity in cortical layers 4-6. At higher magnification it was possible to observe a dense staining of fibers (not shown). Cells in the white matter were also NT-4/5 immunopositive ( Fig. 2G ); these cells could have the typical pyramidal shape of subplate neurons ( Fig. 2H ) or could have a non-pyramidal morphology ( Fig. 2I ).
Figure 2 (A - D) illustrates a series of 70μm thick, NT-4/5 immunostained coronal sections through the visual cortex of kitten K359 at four different levels posterior to the center of NT-4/5 infusion (see Methods ). At a distance of 1000μm from the needle connected to the osmotic minipump, NT-4/5 is present throughout all cortical layers and white matter of the crown of the lateral gyrus. With increasing distance from the center of the infusion, the NT-4/5 labeled area becomes more restricted to the white matter. In Figure 2E , the positions of neurons and dendrites imaged from the 200μm-thick cortical section adjacent to section A have been transferred onto section A. In this case, all the neuronal elements are considered to be included within the NT-4/5 positive area. We considered reconstructed elements to be located outside the neurotrophin-affected area if the adjacent section was completely NT-4/5 negative. Data obtained from these regions are considered controls since, on the basis of physiological experiments on animals implanted with a NT-4/5-filled osmotic minipump, cortical regions outside the infused area have fully normal visual responses ( Gillespie et al. 2000 ). Often, reconstructed neuronal elements were located in cortical layers free from NT-4/5 staining while, in the same section, the underlying white matter was NT-4/5 positive. In this case, neurons could still have been affected by the neurotrophin, albeit indirectly, and we classified them in a separate category (elements in border regions). The spread of NT-4/5 along the anteroposterior axis was variable. Complete labeling of the cortical layers and white matter of the entire lateral gyrus occurred within 1-2mm of the center of the infusion cannula; labeling extended in the white matter alone for another 1-1.5mm. Generally, no NT-4/5 immunostaining was found beyond 4mm from the injection site.
The labeling pattern indicated that a very large expanse of areas 17, 18 and possibly 19 was influenced by the infused neurotrophin.
NT-4/5 causes proliferation of processes in visual cortical neurons
The most interesting feature found in NT-4/5 positive areas is the presence of many cortical neurons whose somata were covered with an elaborate crown of fine processes recalling dendritic spines. Rarely were these somata found in NT-4/5 immunonegative areas: 16 out of 44 neurons (36.4%) analyzed in NT-4/5 infused areas bore processes on their somata, whereas only 2 out of 28 such neurons (7.1%) were found in NT-4/5-free areas. The somatic processes varied in shape and size: from short, stocky offshoots resembling spines to long and slender appendages similar to the filopodia seen in cell cultures ( Dailey and Smith, 1996 ). Neurons with overgrown processes were found in cortical layers 2-6. The vast majority were pyramidal in shape; a few were stellate cells in layer 4; however, because of their small number, the latter have not been included in the analysis.
Figure 3A illustrates examples of somata showing intricate surfaces. In contrast, Figure 3B shows the smooth membrane of normal neurons, characteristic of NT-4/5 negative areas but also present in smaller numbers in NT-4/5 positive areas.
Although more conspicuous on somata, the exuberant sprouting of processes appeared to involve dendrites as well. Qualitatively, we noticed very spiny dendrites in all layers; however, this overgrowth was more apparent on the initial segments of apical dendrites, which are typically rather smooth.
Quantification of spine density confirmed this observation. Spines were serially mapped in confocal microscopy at 0.4 μm intervals. Spines were drawn along a portion of a dendrite and spine density was evaluated using NeuroExplorer. Dendrites were selected only if their origin (for first order dendrites) or the order of their parent branch (for second and third order dendrites) could be unmistakably identified. Data were collected mainly from neurons in layers 2/3, which are relatively small and could easily be labeled by diffusion of the lipophilic dye when a tungsten particle hit any portion of their soma or dendritic arbor. However, the amount of dye coating the tungsten particles appeared to be insufficient to diffuse throughout large membrane surfaces. Large neurons like the pyramidal cells in layers 5 and 6 were rarely labeled from the soma up to the second or third order dendrites with enough clarity to allow spine counting.
Figures 4A-C are plots of spine density (number of spines/μm) on apical and basal dendrites (primary, second and third order) of neurons in layers 2/3, 4 and 5/6. Neurons are divided according to their location in NT-4/5 positive areas, NT-4/5 negative areas or border regions (see above). However, for the statistical analysis (Mann-Whitney U-test), data from neurons located in NT-4/5 positive and border regions have been pooled, since the two populations have similar spine-count distributions. In apical dendrites of layers 2/3 ( Figure 4A , top), spine density in primary and second order dendrites was significantly higher in NT-4/5 positive areas and in border regions compared to NT-4/5 negative areas. Only the first order basal dendrites of layers 2/3 ( Fig. 4A , bottom) had a higher spine density in NT-4/5 positive areas and in border regions compared to NT-4/5 negative areas. The scattergrams of Figures 4B,C show that there was a strong tendency for first and second order basal and apical dendrites of neurons in layers 4 and 5/6 to bear a higher spine density in NT-4/5 infused regions, compared to NT-4/5-free regions. However, the difference between the two groups did not always reach significance. The cumulative analysis for apical and basal dendrites in all layers shows that the mean spine density is significantly higher in NT-4/5 affected regions (means: 0.18/μm outside NT-4/5 areas vs 0.39/μm inside NT-4/5 areas; p<0.0001).
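As an illustration of the comparison above, per-dendrite spine densities (spines/μm) from the two kinds of region can be compared with the same nonparametric test. The densities below are hypothetical examples loosely echoing the reported means of 0.18/μm outside versus 0.39/μm inside NT-4/5 areas, not the measured data:

```python
from scipy.stats import mannwhitneyu

def spine_density(n_spines, segment_length_um):
    """Spines per micrometer of dendritic segment."""
    return n_spines / segment_length_um

# Hypothetical per-dendrite densities (spines/um) for the two regions
outside = [0.12, 0.15, 0.18, 0.20, 0.22, 0.17]
inside = [0.30, 0.35, 0.39, 0.42, 0.45, 0.41]

# Two-sided Mann-Whitney U-test, as used for the group comparisons
u_stat, p_value = mannwhitneyu(inside, outside, alternative="two-sided")
```

With samples this small and no ties, SciPy computes an exact p-value; complete separation of the two groups yields a highly significant result.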
Figure 5 illustrates the percent increase in the mean density of spine-like processes between neurons inside and outside the NT-4/5-infused area. The figure shows that in both apical and basal dendrites of layers 2/3 and 4 neurons, the increase in the mean density of spine-like processes induced by NT-4/5 was higher in first than in second order dendrites. Basal dendrites of neurons in layers 5/6 show a similar, yet milder, effect. In summary, NT-4/5 infusion induced profuse sprouting on the neuronal membrane of somata and on the proximal portions of dendrites.
Pattern of synaptophysin and GAD-65 immunoreactivity in NT-4/5 affected regions
An increase in dendritic protuberances resembling spines in NT-4/5 positive areas strongly suggests enhanced synaptic connections. Are the newly formed structures involved in functional synaptic contacts with presynaptic sites? If so, then presynaptic markers should increase after NT-4/5 treatment. Synaptophysin is an integral membrane glycoprotein of synaptic vesicles, and thus a marker of synaptic vesicle populations in the presynaptic terminals.
The following experiments on kittens K362 and K396 (see Methods ) were aimed at studying whether local infusion of NT-4/5 in the visual cortex was accompanied by a change in synaptophysin immunoreactivity.
In confocal microscopy, synaptophysin immunofluorescence appears as distinct, bright granules present in all cortical layers. Neuronal somata, thick dendrites and blood vessels are devoid of immunostaining and appear as empty, black areas ( Figs. 6A and 7 ). In some neurons, a very pale diffuse immunoprecipitate is also present in the cytoplasm (see Fig. 7 ). Single 2D-images of synaptophysin immunostaining were collected at random from coronal sections located along the whole antero-posterior axis of the visual cortex. Images were taken from the dorsal part of the medial bank of the lateral gyrus, where area 17 is located. In every section, special care was taken to collect the confocal image from the level along the z-axis where the immunostaining was the brightest. Images were then classified as inside or outside the regions in which NT-4/5 was infused by comparison with an adjacent section stained for NT-4/5 immunoreactivity. We discarded images taken from border regions (see above). For each section we measured the average brightness of pixels over the imaged area and the average pixel brightness at the 90th and 95th percentiles. The density of synaptophysin immunostaining was significantly higher in visual cortical regions positive for NT-4/5 ( Table 1 ). This evidence strongly suggests that the neurotrophin had caused a local increase in the number of synapses.
In regions of the visual cortex bathed by the infused NT-4/5 (i.e. regions corresponding to NT-4/5 immunopositive areas in adjacent sections), we often observed around the somata a distinct, dense crown of intensely labeled synaptophysin granules ( Fig. 7 ). This feature was not found in NT-4/5-free areas. This peculiar distribution of the presynaptic marker synaptophysin matches the localization of spine-like processes on somatic profiles and suggests that the latter might be sites of synaptic activity.
Experiments on transgenic mice overexpressing BDNF ( Huang et al., 1999 ) or lacking trkB receptors ( Rico et al., 2002 ) have demonstrated the crucial role of this receptor in the development and maturation of inhibitory neurons. We investigated whether an excess of NT-4/5 in the developing kitten visual cortex was also accompanied by a modulation of inhibition, as measured by the density of GAD-65, a marker for inhibitory synapses. GAD-65 is an isoform of the GABA synthetic enzyme glutamic acid decarboxylase and is localized on the presynaptic terminals of GABAergic inhibitory neurons ( Kaufman et al., 1991 ). Confocal images of GAD-65 immunostained sections were collected from the dorsal part of the lateral gyrus, where the visual cortex is located. Images were classified as inside or outside the regions in which NT-4/5 was infused by comparison with an adjacent section stained for NT-4/5 immunoreactivity. Specifically, images considered outside the NT-4/5 positive area were collected at a distance of 4.5mm from the NT-4/5-positive zone. Examples of such inside and outside fields are shown in Fig. 6B . The methods of image collection and analysis were the same as for the synaptophysin immunostaining. As shown in Table 1 , the density of GAD-65 immunostaining (mean pixel brightness and brightness at the 90th and 95th percentiles) was significantly higher in visual cortical regions positive for NT-4/5 than in NT-4/5-free areas. It should be noted that the increase of GAD-65 immunostaining over background was still present even 1 mm beyond the NT-4/5 immunopositive area (p < 0.02 at each of the three intensity levels measured). This finding suggests that the effects of the neurotrophin extend for some distance beyond the region in which NT-4/5 can be detected immunohistochemically.
The present study demonstrates that excess NT-4/5 administered in vivo into the visual cortex induces a remarkable sprouting of processes from both proximal dendrites and somata of cortical neurons. The morphology of these newly formed processes closely resembles spines, although at times they appear unusually long, resembling filopodia ( Figs. 1 and 3 ). We provide evidence that these processes might bear functional postsynaptic sites by correlating the increase in their density with the marked enhancement of the presynaptic markers synaptophysin and GAD-65 at sites of NT-4/5 infusion. Such an enhancement of connectivity would be consistent with the inference from physiological studies that NT-4/5 infusion promotes promiscuous connections in developing visual cortex ( Gillespie et al., 2000 ).
The presence of protuberances on neuronal somata has been observed in vitro ( Tauer et al., 1996 ) and in non-mammalian species during early development ( Jhaveri and Morest, 1982 ). In the visual cortex, we occasionally observed spine-laden somata in adult mice monocularly enucleated from P10 (Antonini, Wong and Stryker, unpublished observations).
The effect of NT-4/5 in determining new growth of dendritic and somatic processes on a cortical neuron is consistent with a retrograde action of the excess neurotrophin on the afferents to that neuron, causing growth and elaboration of their terminals. The increased presynaptic load might then increase the demand for postsynaptic sites, leading to the proliferation of spine-like processes on the postsynaptic membrane. Alternatively, the effect of NT-4/5 may be explained by a direct action of the neurotrophin on cortical neuron morphology, with dendritic expansion and growth of spines ( McAllister et al., 1995 ).
The retrograde hypothesis.
The role of NT-4/5 in the elaboration of neuronal afferents derives from the hypothesis that, under physiological conditions, neurotrophins are secreted by neurons on the basis of correlated activity between the neuron and its presynaptic partners. Neurotrophins would act on the presynaptic terminals as retrograde, target-derived trophic factors and would favor the stabilization and growth of the synaptic pool between the pre- and postsynaptic elements. Ultimately, the neurotrophins’ action would be responsible for the elaboration and plasticity of presynaptic axonal terminals ( Cohen-Cory and Fraser, 1995 ; Oppenheim, 1996 ; McAllister et al., 1999 ; Schumann, 1999 ).
In the mammalian visual system the retrograde action of neurotrophins has been related to the development and remodeling of eye-specific geniculocortical afferents forming OD columns, through a process involving competition (reviewed in Thoenen, 1995 ; Bonhoeffer, 1996 ; Shatz, 1997 ). Indeed, recent experimental evidence has suggested that in cortical layer 4 of the visual cortex, geniculocortical afferents serving each eye compete for a limited amount of a specific neurotrophin retrogradely released by the postsynaptic cortical neuron in an activity-dependent manner ( Carmignoto et al., 1993 ; Cabelli et al., 1995 ; Hata et al., 2000 ). Several lines of evidence indicate that, in the cat, the neurotrophins of choice for this task are trkB ligands. First, in the ferret’s lateral geniculate nucleus, the neurons of origin of geniculocortical connections express both trkB and trkC receptors during development ( Allendoerfer et al., 1994 ; Cabelli et al., 1996 ). Similarly, in the cat visual cortex, trkB receptors are present on geniculocortical afferents in layer 4 ( Silver and Stryker, 2001 ). Second, maintenance and plasticity of OD columns during the critical period appear to be overridden by an excess of BDNF and NT-4/5 or by manipulations to remove endogenous neurotrophins by infusion of chimeric trkB-IgG fusion proteins (for review see Shatz, 1997 ). In fact, infusion of BDNF causes the anatomical desegregation of OD columns in normal kittens ( Cabelli et al., 1995 ; Hata et al., 2000 ). Desegregation of both deprived and non-deprived geniculocortical afferents with expansion of their terminal fields within layer 4 is also observed in MD animals ( Hata et al., 2000 ). Finally, physiological experiments show that infusion of excess NT-4/5 abolishes OD plasticity after a brief period of MD ( Gillespie et al., 2000 ).
A proliferation of promiscuous connections after infusion of NT-4/5, unconfirmed by correlated activity, is also credited for the loss of orientation selectivity ( Gillespie et al., 2000 ). Orientation selectivity relies both on the precise geometry of geniculocortical innervation ( Chapman et al., 1991 ; Ferster et al., 1996 ) and on the fine tuning of intracortical circuits ( Ferster and Miller, 2000 ), and could be disrupted by the disorganized overgrowth of connections due to an excess of neurotrophin.
Our study demonstrates that the effect of NT-4/5 is not limited to layer 4, the main geniculate recipient cortical layer. Proliferation of dendritic and somatic processes was observed on neurons in all cellular layers. Since after infusion of BDNF, another trkB ligand, geniculate afferent sprouting is limited to layer 4 ( Hata et al., 2000 ), the spatially diffuse action of NT-4/5 in our study suggests a target-derived growth of additional axonal terminals beyond the geniculate afferents.
Evidence from studies on BDNF indicates that inhibitory cortical circuits may be potentiated by activation of trkB receptors. Indeed, studies in vitro show that application of BDNF promotes differentiation of neurons in rat neocortical cultures ( Nawa et al., 1993 ; Rutherford et al., 1997 ). In vivo BDNF infusion, or its overexpression in transgenic mice, has a trophic and maturational effect on GABAergic neurons – both increasing soma size and neuropeptide expression ( Huang et al., 1999 ; Nawa et al., 1994 ; Widmer and Hefti, 1994 ). Mice that overexpress BDNF also have a precocious critical period ( Hanover et al., 1999 ), perhaps as a result of the enhancement of inhibition, since pharmacologically enhancing inhibition in normal mice can also create a precocious critical period ( Fagiolini and Hensch, 2000 ). In our study, the preferential arrangement of newly formed spines on soma and proximal dendrites suggests that a potentiation of inhibitory circuits with sprouting of inhibitory terminals could indeed have occurred.
Plasticity of intracortical circuits may be the endpoint of a cascade of growth-inducing effects triggered by the retrograde action of the neurotrophin, or may simply derive from the direct response of cortical neurons to the trophic action of NT-4/5.
Evidence for a direct effect of NT-4/5 on cortical neurons
McAllister et al. (1995 , 1997 ) have shown distinct effects of different neurotrophins on growth and elaboration of dendrites in the visual cortex. These authors tested the anatomical effect of neurotrophins on neurons in layers 4, 5 and 6 in organotypic slice cultures of ferret visual cortex approximately 4 weeks earlier in development than the animals of the present study. Layer 2/3 neurons had not yet completed migration at the age tested. NT-4/5 had a widespread trophic action in all layers, although basal dendrites of layer 5 and 6 pyramidal neurons, and the apical dendrite of layer 4 and 6 pyramidal neurons, responded maximally to this neurotrophin, with a dramatic increase in complexity and protospine formation. These results suggest that neurotrophins act directly on the cell through trkB receptors on its membrane rather than indirectly through elaboration of specific sets of presynaptic connections. Similar results were obtained in rat visual cortex organotypic cultures transfected with plasmids encoding NT-4/5/EGFP (enhanced green fluorescent protein). After 10 days in culture, pyramidal neurons of layers II/III and VI showed a higher spine density than controls ( Wirth et al. 2003 ). Our data are consistent with a generalized effect of NT-4/5 on cortical neurons. Measurements from the entire neuronal population showed a dramatic increase of the mean spine density in NT-4/5-treated regions. The greatest effects were observed on the first order dendrites.
Yacoubian and Lo (2000) observed a similar pattern of specific proximal dendritic growth in organotypic cultures of ferret visual cortex overexpressing full-length trkB, whereas the truncated form of the receptor, T1, favored growth of more distal portions of the dendritic tree. The authors discussed this observation in the context of the developmental regulation of dendritic morphology, in view of the evidence that the expression ratio between the full-length trkB and T1 receptors is high in the early phases of development, when dendrites demonstrate the greatest growth, and is low later in development during the critical period, the age at which our experiments were carried out, and in adulthood ( Allendoerfer et al., 1994 ). Although these authors focused on dendritic complexity and not on spine density, our observation of a greater effect of NT-4/5 on the proximal portions of cortical neurons suggests the presence of high levels of the full-length trkB isoform, as if the system had reacquired some of its early developmental potential. However, evidence for up-regulation of trkB receptors by NT-4/5 is scant: Frank et al. (1996) report that prolonged exposure to BDNF, but not NT-4/5, causes a down-regulation of trkB function.
Horch and Katz (2002) showed that the immediate neighbors of cells caused to overexpress BDNF, and the overexpressing neurons themselves by a paracrine action of BDNF, are much more dynamic, growing and losing branches and spines much more rapidly than control neurons ( Horch et al., 1999 ). This evidence further suggests that BDNF may have a direct action on dendrites independent of presynaptic structures. Similar dynamics may also involve presynaptic partners in a developing neuronal circuit ( Jontes and Smith, 2000 ). Part of the profuse growth induced by infused NT-4/5 has most likely occurred either through synaptic activation or by a retrograde effect of the neurotrophin on the axon, since the effect was noticeable in those layers 2/3 neurons that resided at the border of NT-4/5 diffusion (gray circles in Fig. 4A ), i.e. in regions in which the soma and dendrites did not appear to be bathed in the neurotrophin, but the underlying white matter did show immunoreactivity.
Involvement of trkB in synapse regulation
The remarkable increase of protuberances after infusion of excess NT-4/5 was accompanied by a parallel increase in the presynaptic marker synaptophysin, suggesting that the newly formed spines could bear functional synapses. These findings are consistent with in vitro and in vivo studies on both the peripheral and central nervous systems showing that neurotrophins are involved in the regulation of the synaptic machinery ( Snider and Lichtman, 1996 ; Whitford et al., 2002 ). Neurotrophins can alter the number of synapses ( Alsina et al., 2001 ; Nja and Purves 1978 ; Rico et al., 2002 ), levels of synaptic vesicle proteins ( Causing et al., 1997 ), synaptic vesicle density ( Martinez et al., 1998 ) and both pre- and postsynaptic efficacy ( Takei et al. 1997 ; Wang and Poo, 1997 ). It is worth mentioning that the increase of synaptophysin expression in the present experiments might not depend solely on an increase in spine density on pre-existing dendrites, but could also depend on new growth of spine-laden dendrites ( McAllister et al. 1995 , 1997 ; Yacoubian and Lo, 2000 ).
A contingent of the newly formed synaptic sites appears to be involved in inhibitory circuits, as suggested by the increase of GAD-65 immunostaining in NT-4/5 affected areas. This finding is in keeping with the evidence that trkB activation through another neurotrophin, BDNF, is able to modulate inhibitory circuitry in the visual cortex ( Hensch et al., 1998 ; Huang et al., 1999 ) and the cerebellum ( Rico et al., 2002 ).
The presence of filopodia in NT-4/5-infused cortex is a further indication that cortical neurons reacquire some immature traits, since these structures are thought to be present in early synaptogenesis. Their presence in young hippocampal cultures of postnatal brain, their high motility and the production of synapses on their tips and shafts, has led to the view that filopodia are transient precursors of more stable spines. Filopodia have been suggested to attract presynaptic terminals and guide them to the dendrite; possibly, they would form an excess of transient synaptic contacts that are then selected in an activity-dependent manner ( Jontes and Smith, 2000 ).
Conclusions for the role of neurotrophins in the formation of cortical circuits
The present findings provide anatomical confirmation of the suggestion made from physiological studies ( Gillespie et al., 2000 ) that NT-4/5 infusion into critical period visual cortex stimulates the formation of promiscuous neuronal connections. One interpretation of these findings is that the mechanisms for attaining specificity in neuronal connections involve the promotion of the growth of appropriate (but not inappropriate) presynaptic connections by limiting amounts of trkB neurotrophins secreted by the postsynaptic neuron, along with the growth of appropriate postsynaptic structures. In this case, the infusion experiments would have eliminated specificity by supplying sufficient neurotrophin for all inputs, including inappropriate ones.
In the case of monocular deprivation, if limiting amounts of trkB neurotrophins are the explanation for the loss of input serving the deprived eye, then a doubling of the neurotrophin level should be sufficient to preserve inputs from both eyes. The 2-3 fold overexpression of BDNF postnatally in mouse cortex does not, however, prevent the effects of monocular deprivation and make the two eyes equally effective. Instead, such overexpression of a trkB ligand results in a precocious critical period but otherwise normal plasticity ( Hanover et al., 1999 ). In this case, it appears that the excess neurotrophin has stimulated the precocious but otherwise normal development specifically of inhibitory neurons ( Huang et al., 1999 ). These data then suggest that neurotrophins are not directly involved in the mechanisms of synapse specificity responsible for activity-dependent plasticity, but they are not definitive because of the possibility that compensatory mechanisms in the overexpresser mice have elevated the threshold for neurotrophin actions.
The increase of GAD-65 immunoreactivity in the areas in which NT-4/5 was infused indicates that inhibitory circuitry was also stimulated to proliferate. This finding may explain the absence of hyperexcitability or epilepsy in NT-4/5 treated cortex despite a dramatic overall increase of synaptic connections, as inferred from the increase of spine-like processes and the presynaptic marker synaptophysin, and the increases in visual responses to non-optimal stimuli ( Gillespie et al., 2000 ). The increase in inhibitory circuitry may be a direct effect of trkB stimulation. Alternatively, it may represent a homeostatic response to increased excitation, or vice versa. The time course of the physiological changes after the onset of NT-4/5 infusion does not include transient periods of hypo- or hyper-excitability, and is therefore not informative about which effect is more rapid or more nearly direct.
Modulation of inhibition has been shown to alter cortical plasticity ( Hensch et al., 1998 ), although even dramatic increases do not prevent it, but instead reverse its direction ( Reiter and Stryker, 1988 ; Hata and Stryker, 1994 ). The failure of MD to induce plasticity in visual cortex infused with NT-4/5 suggests that the increased inhibition itself is non-specific, or the combination of inhibition with an overall enhancement of synaptic activity leads to non-specific cortical circuits.
An alternative explanation for the effects of infused NT-4/5 is that the high concentrations of neurotrophin cause the cortical neurons to reacquire characteristics of earlier developmental stages. In this case, the excess growth is part of a growth program, rather than part of the mechanisms that generate specificity. Such growth programs are consistent with other actions of neurotrophins in regulating neuronal size and number ( Levi-Montalcini, 1966 ), rather than synapse specificity. Such programs might involve local synthesis of dendritic structural proteins or the rapid modulation of actin cytoskeleton dynamics ( Whitford et al., 2002 ). These aspects of neurotrophins’ action could also be responsible for the maintenance of neuronal structure in adulthood, since complete elimination of trkB activation in the adult mouse causes loss of specific neocortical cell populations ( Xu et al., 2000 ).
The growth-promoting effects of an excess of neurotrophins, either through the rewarding of both appropriate and inappropriate connections or through the resetting of the developmental clock to an earlier stage, can occur only during the critical period. Indeed, NT-4/5 infusion in vivo into visual cortex is without effect after the critical period, and BDNF infusion causes overgrowth of geniculocortical afferents only in critical period kittens, but not in adult cats ( Hata et al., 2000 ). Furthermore, BDNF infusion caused a reversal of the physiological ocular dominance shift in monocularly deprived kittens but not in adult cats ( Galuske et al., 1996 , 2000 ) and, similarly, induced synaptic potentiation of the geniculocortical pathway in young but not in adult rats ( Jiang et al., 2001 ).
More recent studies of neurotrophin action in the cerebral cortex have turned almost exclusively to the mouse, where new genetic technology has made it possible to alter specific signaling mechanisms. In particular, the Shokat inhibitor, a chemogenetic manipulation, allows delivery of a small molecule to completely cut off the kinase activity of the trkB receptor, activation of which is the principal mechanism by which both NT-4/5 and BDNF operate ( Specht and Shokat, 2002 ; Chen et al., 2005 ). Blockade of trkB kinase activity had no effect on the loss of visual response to an eye occluded during the critical period or on the increase in response to the open eye, but it completely blocked the recovery of response to the deprived eye ( Kaneko et al., 2008 ). This finding suggested that BDNF acting on trkB was essential for the regrowth of connections serving the deprived eye, which was interpreted as the means by which responses to that eye were restored. This interpretation was reinforced by two additional findings. First, longitudinal anatomical imaging using 2-photon microscopy indicated that monocular deprivation and recovery were indeed accompanied by the loss and restoration of inputs serving the deprived eye ( Sun et al., 2019 ). Second, measurement of the time course of production of mature BDNF revealed that it had surged by 6 hours before the recovery of visual responses to the deprived eye in each of two different circumstances in which the time to recovery differed by 24 hours ( Kaneko and Stryker, 2023 ). These experiments, however, did not reveal the source—whether neuronal or glial—of the BDNF that mediated recovery of connections and responses.
A clue to a surprising possible source was the finding that blocking the export of BDNF mRNA to the dendrites of pyramidal cells largely phenocopied the effect of blocking trkB signaling during monocular deprivation or that of an insufficiency of BDNF on the maintenance of cortical structure ( Kaneko et al., 2012 ). Future experiments are required to determine whether BDNF infusion in the mouse has the same effects as shown here from NT-4/5 infusion in the cat.
Taken together, the results described above implicate trkB neurotrophins in the growth and development of specific types of neurons, in determining the onset of the critical period, and in the maintenance of specific populations of neurons in adult life. To date, strong evidence for their role in synapse specificity is still lacking.
Current hypotheses on the mechanisms underlying the development and plasticity of the ocular dominance system through competitive interactions between pathways serving the two eyes strongly suggest the involvement of neurotrophins and their high affinity receptors. In the cat, infusion of the tyrosine kinase B (trkB) ligand neurotrophin-4/5 (NT-4/5) abolishes ocular dominance plasticity that follows monocular deprivation ( Gillespie et al., 2000 ), while tyrosine kinase A and C ligands (trkA and trkC) do not have this effect. One interpretation of this finding is that NT-4/5 causes overgrowth and sprouting of thalamocortical and/or corticocortical terminals, leading to promiscuous neuronal connections which override the experience-dependent fine tuning of connections based on correlated activity. The present study tested whether neurons in cortical regions infused with NT-4/5 showed anatomical changes compatible with this hypothesis. Cats at the peak of the critical period received chronic infusion of NT-4/5 into visual cortical areas 17/18 via an osmotic minipump. Visual cortical neurons were labeled in fixed slices using the DiOlistics methods ( Gan et al., 2000 ) and analyzed in confocal microscopy. Infusion of NT-4/5 induced a significant increase of spine-like processes on primary dendrites and a distinctive sprouting of protuberances from neuronal somata in all layers. The increase of neuronal membrane was paralleled by an increase in density of the presynaptic marker synaptophysin in infused areas, suggesting an increase in the numbers of synapses.
A contingent of these newly formed synapses may feed into inhibitory circuits, as suggested by an increase of GAD-65 immunostaining in NT-4/5 affected areas. These anatomical changes are consistent with the physiological changes in such animals, suggesting that excess trkB neurotrophin can stimulate the formation of promiscuous connections during the critical period. | ACKNOWLEDGEMENTS
Supported by NIH R37 EY02874. We thank Genentech for providing the NT-4/5 used in this study, Drs. J. Grutzendler and R. Wong for introducing us to the DiOlistics technique, and Karen MacLeod for invaluable help during the surgical procedures. | CC BY | no | 2024-01-16 23:49:20 | bioRxiv. 2023 Dec 22;:2023.12.20.572693 | oa_package/14/a0/PMC10769316.tar.gz |
|
PMC10769358 | 38187569 | INTRODUCTION
Bacterial replication requires available nutrients and transcription of anabolic genes. When nutrients become limiting, cells adapt to their environment by activating the broadly conserved bacterial stress response pathway known as the stringent response (SR) ( 1 ). During the SR, stressors such as nutrient limitation and/or heat shock are sensed by Rel/Spo homolog (RSH) proteins, which synthesize and/or hydrolyze the second messengers guanosine 5’-diphosphate 3’-diphosphate and guanosine 5’-triphosphate 3’-diphosphate (collectively known as (p)ppGpp) ( 1 , 2 ). (p)ppGpp broadly impacts DNA replication, transcription, translation, and metabolism by directly or indirectly interacting with various proteins to halt growth until conditions improve ( 1 ).
Caulobacter crescentus (hereafter Caulobacter ) is a Gram-negative freshwater α-proteobacterium with a dimorphic life cycle that allows it to exist as a nutrient-seeking swarmer cell or as a reproductive stalked cell ( 1 ). Nutrient limitation and (p)ppGpp accumulation slow the swarmer-to-stalked transition, likely to allow the swarmer cell more time to seek out nutrients before differentiating into a stalked cell ( 1 – 4 ). This transition is governed by regulatory proteins, whose levels and activities are impacted by (p)ppGpp upon SR activation ( 3 , 5 ). The availability of nutrients thus regulates the Caulobacter cell cycle, promoting anabolic processes and cell cycle progression in nutrient-rich environments, while favoring catabolic processes and nutrient-seeking behavior when nutrients are lacking.
Caulobacter inhabits oligotrophic environments and is regularly exposed to fluctuations in nutrient availability; therefore, cells must balance anabolism with SR activation by readily responding to nutrient deprivation ( 1 , 6 ). In Caulobacter , nutrient limitation is sensed by a single bifunctional RSH protein that is known as SpoT, which synthesizes (p)ppGpp during nutrient stress to initiate the SR ( 1 , 5 , 6 ). Ultimately, (p)ppGpp accumulation causes downregulation of genes required for translation, growth, and division, and upregulation of genes important for responding to stress, thus impacting the activities of proteins involved in both anabolism and SR activation ( 1 , 7 – 9 ). A major way in which (p)ppGpp exerts these effects on gene regulation is by binding directly to two sites on RNA polymerase (RNAP). These are referred to as “site 1” and “site 2” in Escherichia coli , and both sites are conserved in Caulobacter ( 1 , 5 – 11 ). Site 2 is formed at the interface between RNAP and the transcription factor DksA, and the binding of (p)ppGpp to this site specifically has been implicated in enhancing the influence of DksA on RNAP and leading to inhibition of rRNA promoter activity and decreased anabolism ( 1 , 7 , 9 , 12 ).
(p)ppGpp can also cause a downregulation of anabolic gene transcription by controlling the levels and activities of other transcriptional regulators. In Caulobacter , a major regulator of anabolic gene transcription and cell cycle progression is the broadly conserved transcription factor CdnL, which stands for “CarD N-terminal like” as it shares homology with the N-terminal domain of the Myxococcus xanthus transcription factor, CarD ( 13 – 16 ). Under nutrient-rich conditions, CdnL directly binds to RNAP and promotes transcription of housekeeping genes and growth ( 13 , 17 ). Similarly, the Mycobacterium tuberculosis CdnL homolog CarD was shown to be an activator of rRNA genes whose levels are positively correlated with mycobacterial growth ( 17 – 21 ). We have previously shown that loss of Caulobacter CdnL (Δ cdnL ) results in slowed growth and altered morphology, as well as downregulation of anabolic genes, such as those involved in macromolecule biosynthesis and ribosome biogenesis ( 14 , 16 ). These data are further supported by metabolomic analyses showing altered metabolite levels in Δ cdnL cells ( 14 ).
The physiological changes observed in Δ cdnL cells mirror many of those of cells undergoing stringent response activation. Indeed, mycobacterial CarD was recently found to be downregulated during stress conditions, though a link to the SR was not examined ( 22 ). These putative connections made us consider if CdnL is regulated in a similar manner during the Caulobacter SR. Here, we demonstrate that CdnL is regulated post-translationally during the SR, and that this regulation promotes effective adaptation to changes in nutrient availability by altering anabolic gene transcription. | METHODS
Caulobacter crescentus and Escherichia coli growth media and conditions
C. crescentus NA1000 cells were grown at 30°C in peptone-yeast extract (PYE) medium or in the minimal media described below. E. coli NEB Turbo (NEB Catalog #C2986K) and Rosetta(DE3)/pLysS cells were grown at 37°C and 30°C, respectively, in Luria-Bertani (LB) medium. Antibiotics for Caulobacter growth were used in liquid (solid) medium at the following concentrations: gentamicin, 1 (5) μg/mL; kanamycin, 5 (25) μg/mL; oxytetracycline, 1 (2) μg/mL; and spectinomycin, 25 (100) μg/mL. Streptomycin was used at 5 μg/mL in solid medium. E. coli antibiotics were used in liquid (solid) medium as follows: ampicillin, 50 (100) μg/mL; gentamicin, 15 (20) μg/mL; kanamycin, 30 (50) μg/mL; oxytetracycline, 12 (12) μg/mL; and spectinomycin, 50 (50) μg/mL. Strains and plasmids used in this study are listed in Table S3 .
Starvation experiments
Cells were grown overnight in 2 – 4 mL minimal media with appropriate antibiotics. For both carbon and nitrogen starvation, M2G (M2 with 0.2% glucose w/v) was used ( 40 ). For phosphate starvation, Hutner base imidazole-buffered glucose glutamate medium (HIGG) was used ( 41 ). The next day, mid-log phase cells were harvested by centrifugation and washed thrice with nutrient-poor media. For carbon starvation, M2 media (without glucose) was used; for nitrogen starvation, M2G lacking NH 4 Cl was used; and for phosphate starvation, HIGG lacking Na 2 HPO 4 -KH 2 PO 4 was used. After the final wash, cells were resuspended in 5 mL nutrient-limited media, divided into four 1.2 mL cultures, and incubated at 30°C shaking at 225 rpm. OD 600 was taken every 30 minutes from 0 to 90 minutes. Then, 0.5 or 1 mL of cells was harvested by centrifugation and resuspended in OD 600 /0.003 μL of 1X SDS loading dye for subsequent immunoblotting. Three biological replicates were obtained for each strain in each condition.
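As we read the protocol, the resuspension volume scales linearly with culture density so that equivalent OD units are loaded per gel lane. A minimal sketch of that normalization (the fixed divisor of 0.003 comes from the text; the function name and everything else is our illustration):

```python
def loading_dye_volume_ul(od600, divisor=0.003):
    """Volume of 1X SDS loading dye (in uL) used to resuspend a harvested
    pellet, proportional to the culture's OD600 so that equal OD units
    end up in each lane (sketch of the OD600/0.003 rule described above)."""
    return od600 / divisor

# a culture harvested at OD600 = 0.3 would be resuspended in ~100 uL of dye
```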
RelA’ induction
EG1799 and EG1800 were grown overnight in 5 mL M2G with appropriate antibiotics. The next day, mid-log phase cells were diluted to OD 600 = 0.2 in 10 mL M2G with 0.3% xylose (w/v). Cultures were distributed into seven 1.4 mL aliquots and incubated as described above. OD 600 was taken every 30 minutes from 0 to 120 minutes and protein samples for immunoblotting were made by resuspension in 1X SDS loading dye. Three biological replicates were obtained for each strain.
Transcriptional shut-off
EG3190, EG3193, and EG3194 were grown overnight in 2 – 4 mL M2 with 0.3% xylose and appropriate antibiotics. The next day, mid-log phase cells were harvested by centrifugation and washed thrice with M2G (to shut off transcription) or M2 (to shut off transcription and starve cells). After the final wash, cells were resuspended in 5 mL M2G or M2 and were incubated as described above. A total of 3 biological replicates were obtained for each strain. Immunoblotting samples were prepared by resuspension in 1X SDS loading dye.
ClpX* induction
EG802 and EG865 were grown overnight in 2 – 4 mL M2G with appropriate antibiotics. The next day, mid-log phase cultures were diluted to OD 600 = 0.1 – 0.25 in 5 mL M2G supplemented with 0.3% xylose (w/v) and grown for 2 hours. The starvation was performed as described above. Three biological replicates were obtained for each strain.
Immunoblotting
Equivalent OD units of cell lysate were loaded on an SDS-PAGE gel following cell harvest by centrifugation, lysis in 1X SDS loading dye, and boiling for 5 – 10 minutes. SDS-PAGE and protein transfer to nitrocellulose membranes were performed using standard procedures. Antibodies were used at the following dilutions: CdnL 1:10,000 ( 14 ); MreB 1:10,000 (Regis Hallez, University of Namur); SpmX 1:20,000 (Patrick Viollier, University of Geneva); GFP 1:2,000 (Clontech Labs Catalog #NC9777966); HRP-labeled α-rabbit secondary 1:10,000 (BioRAD Catalog #170-6515); and/or HRP-labeled α-mouse secondary (Cell Signaling Technology Catalog #7076S). Clarity Western ECL substrate (BioRAD Catalog #170-5060) was used to visualize proteins on an Amersham Imager 600 RGB gel and membrane imager (GE).
Protein purification
CdnL and CdnLDD were overproduced in Rosetta (DE3) pLysS E. coli from pEG1129 (His-SUMO-CdnL) and pEG1634 (His-SUMO-CdnLDD), respectively. Cells were induced with 1 mM IPTG for 3 hours at 30°C. Cell pellets were resuspended in Column Buffer A (50 mM Tris-HCl pH 8.0, 300 mM NaCl, 10% glycerol, 20 mM imidazole, 1 mM β-mercaptoethanol), flash frozen in liquid nitrogen, and stored at −80°C. To purify the His-SUMO tagged proteins, pellets were thawed at 37°C, and 10 U/mL DNase I, 1 mg/mL lysozyme, and 2.5 mM MgCl2 were added. Cell slurries were rotated at room temperature for 30 minutes, then sonicated and centrifuged for 30 minutes at 15,000 x g at 4°C. Protein supernatants were then filtered and loaded onto a pre-equilibrated HisTrap FF 1 mL column (Cytiva, Marlborough, Massachusetts). His-SUMO-CdnL and His-SUMO-CdnLDD were eluted in 30% Column Buffer B (same as Column Buffer A but with 1 M imidazole), and peak fractions were concentrated. The His-Ulp1 SUMO protease was added at a molar ratio (protease:protein) of 1:290 and 1:50 for His-SUMO-CdnL and His-SUMO-CdnLDD, respectively, and dialyzed into 1 L Column Buffer A overnight at 4°C. The cleaved protein solutions were again loaded onto a HisTrap FF 1 mL column. Peak flowthrough fractions were combined, concentrated, and applied to a Superdex 200 10/300 GL (Cytiva) column equilibrated with storage buffer (50 mM HEPES-NaOH pH 7.2, 100 mM NaCl, 10% glycerol). Peak fractions were combined, concentrated, snap-frozen in liquid nitrogen, and stored at −80°C. ClpX was purified using a similar protocol as described for E. coli ClpX ( 24 ). Recombinant His-tagged ClpP was purified as described ( 42 ).
In vitro proteolysis assays
In vitro proteolysis reactions were performed with purified ClpX, ClpP, and CdnL or CdnLDD as previously described ( 25 ).
Monitoring CdnL protein levels
EG865, EG1139, and EG2530 were inoculated in 2 mL M2G. The next day, the cultures were diluted to an OD 600 of ~0.001 and grown for approximately 18 hours or until the cultures reached an OD 600 of ~0.4 – 0.5. At an OD 600 of ~0.4 – 0.5, a 500 μL culture sample was taken to make protein samples for immunoblotting as described above. Further samples were taken 1, 2, 4, 6, 7, 8, and 25 hours later. Immunoblotting was performed as described above, except that following the transfer, membranes were stained with Ponceau stain for total protein normalization.
Chromatin immunoprecipitation coupled to deep sequencing (ChIP-seq)
EG865, EG1898, and EG2530 were grown overnight in 6 mL M2G (M2 with 0.2% glucose w/v) at 30°C (shaking at 200 rpm). The next day, cultures were diluted into 100 mL M2G and grown to an OD 660 of 0.5. Cells were harvested by centrifugation at 8,000 rpm for 10 min at 25°C, and cell pellets were washed 3 times with 50 mL of M2 media (without glucose) at 25°C. Each of the washed cell pellets was split into two and used to inoculate 50 mL of M2 (carbon starvation) and 50 mL of M2G (preheated culture medium at 30°C), respectively. Cultures were incubated at 30°C (shaking at 200 rpm) for an additional 60 minutes, then supplemented with 10 μM sodium phosphate buffer (pH 7.6) and treated with formaldehyde (1% final concentration) at RT for 10 min to achieve crosslinking. Subsequently, the cultures were incubated for an additional 30 min on ice and washed three times in phosphate buffered saline (PBS, pH 7.4). The resulting cell pellets were stored at −80°C. After resuspension of the cells in TES buffer (10 mM Tris-HCl pH 7.5, 1 mM EDTA, 100 mM NaCl) containing 10 mM DTT, the cell suspensions were incubated in the presence of Ready-Lyse lysozyme solution (Epicentre, Madison, WI) for 10 minutes at 37°C, according to the manufacturer’s instructions. Lysates were sonicated (Bioruptor Pico) at 4°C using 15 bursts of 30 sec to shear DNA fragments to an average length of 0.3–0.5 kbp and cleared by centrifugation at 14,000 rpm for 2 min at 4°C. The volume of the lysates was then adjusted (relative to the protein concentration) to 1 mL using ChIP buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 16.7 mM Tris-HCl [pH 8.1], 167 mM NaCl) containing protease inhibitors (Roche) and pre-cleared with 80 μL of Protein-A agarose (Roche) and 100 μg BSA. Five percent of the pre-cleared lysates was kept as total input samples.
The rest of the pre-cleared lysates were then incubated overnight at 4°C with polyclonal rabbit antibodies targeting CdnL (1:1,000 dilution). Immunocomplexes were captured by incubation with Protein-A agarose beads (pre-saturated with BSA) for 2 hr at 4°C and then washed sequentially with low salt washing buffer (0.1% SDS, 1% Triton X-100, 2 mM EDTA, 20 mM Tris-HCl pH 8.1, 150 mM NaCl), with high salt washing buffer (0.1% SDS, 1% Triton X-100, 2 mM EDTA, 20 mM Tris-HCl pH 8.1, 500 mM NaCl), with LiCl washing buffer (0.25 M LiCl, 1% NP-40, 1% deoxycholate, 1 mM EDTA, 10 mM Tris-HCl pH 8.1), and finally twice with TE buffer (10 mM Tris-HCl pH 8.1, 1 mM EDTA). The immunocomplexes were eluted from the Protein-A beads twice with 250 μL elution buffer (1% SDS, 0.1 M NaHCO3, freshly prepared) and then, just like the total input sample, incubated overnight with 300 mM NaCl at 65°C to reverse the crosslinks. The samples were then treated with 2 μg of Proteinase K for 2 hr at 45°C in 40 mM EDTA and 40 mM Tris-HCl (pH 6.5). DNA was extracted using phenol:chloroform:isoamyl alcohol (25:24:1), ethanol-precipitated using 20 μg of glycogen as a carrier, and resuspended in 30 μL of DNase/RNase-free water.
Immunoprecipitated chromatin was used to prepare sample libraries for deep sequencing at Fasteris SA (Geneva, Switzerland). ChIP-Seq libraries were prepared using the DNA Sample Prep Kit (Illumina) following the manufacturer’s instructions. A single-end run of 50 cycles was performed on an Illumina next-generation sequencing instrument (NextSeq, High Output), yielding several million reads per sample. The single-end sequence reads, stored in FastQ files, were mapped against the genome of C. crescentus NA1000 (NC_011916.1) using Bowtie2 Version 2.4.5+galaxy1 available on the web-based analysis platform Galaxy ( https://usegalaxy.org ) to generate standard genomic position format (BAM) files. ChIP-Seq read and alignment statistics are summarized in DatasetS1 . BAM files were then imported into SeqMonk version 1.47.2 ( http://www.bioinformatics.babraham.ac.uk/projects/seqmonk/ ) to build normalized ChIP-Seq sequence read profiles. Briefly, the genome was subdivided into 50 bp probes, and for every probe we calculated the number of reads per probe as a function of the total number of reads (per million, using the Read Count Quantitation option; DatasetS1 ). Using Galaxy, CdnL(M2+G) ChIP-Seq peaks were called using MACS2 Version 2.2.7.1+galaxy0 (no broad regions option) relative to the ΔcdnL (M2+G) negative ChIP-seq control. The q-value (false discovery rate, FDR) cut-off for called peaks was 0.05. Peaks were rank-ordered according to their fold-enrichment values ( DatasetS1 ; the 107 CdnL(M2+G) ChIP-Seq statistical peaks with a fold enrichment greater than 2 were retained for further analysis). MACS2-analysed data illustrated in Figure S5 (relevant CdnL peaks identified by ChIP-seq) are provided in DatasetS1 .
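The per-probe quantitation described above can be sketched in a few lines; this is our simplified illustration of binning mapped read positions into 50 bp probes and normalizing to reads per million (the probe size and per-million normalization come from the text; the function itself is hypothetical, not SeqMonk's code):

```python
import numpy as np

def probe_profile(read_starts, genome_length, probe_size=50):
    """Bin mapped read start positions (0-based) into fixed-width probes
    and normalize to reads per million, mimicking the SeqMonk
    'Read Count Quantitation' step described above (simplified sketch)."""
    n_probes = -(-genome_length // probe_size)  # ceiling division
    counts = np.zeros(n_probes)
    for pos in read_starts:
        counts[pos // probe_size] += 1
    total = counts.sum()
    return counts * 1e6 / total if total else counts

# five reads on a toy 200 bp genome: two in probe 0, two in probe 1, one in probe 2
profile = probe_profile([5, 12, 60, 61, 149], genome_length=200)
```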
Then, CdnL(M2+G) ChIP-seq equivalent peaks (overlapping peak “start” and “end” MACS2 coordinates) were searched for in the MACS2 statistical peak analyses of the ChIP-Seq CdnL(M2−G), CdnLDD(M2+G), and CdnLDD(M2−G) datasets (relative to the ΔcdnL (M2+G) or (M2−G) negative ChIP-seq controls, as appropriate). In cases where no overlapping peaks were identified, a default fold-enrichment value of 1 was assigned. To facilitate visualization and analysis, these data were subsequently submitted to the http://www.heatmapper.ca/expression/ website to generate a heat map (parameters: clustering method “average linkage”; distance measurement method “Pearson”; scale “Row Z-score”). Row Z-score = (CdnL fold-enrichment value in the sample of interest − mean CdnL fold enrichment across all samples) / standard deviation. Analysed data illustrated in Figure 3A , as well as the list of genes composing each group, are provided in DatasetS1 . For each group defined during the heat map analysis, the list of genes potentially regulated by CdnL (presence of a statistically significant CdnL peak detected in the gene’s promoter region) was submitted to the DAVID website (The Database for Annotation, Visualization and Integrated Discovery; https://david.ncifcrf.gov/home.jsp ) to identify enriched biological themes, particularly GO terms ( DatasetS1 ). Sequence data have been deposited in the Gene Expression Omnibus (GEO) database (GSE249185 series, accession numbers GSM7927858 – GSM7927866).
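The row Z-score formula above translates directly to code. A sketch (we use the sample standard deviation; heatmapper's exact SD convention is an assumption on our part):

```python
import numpy as np

def row_z_scores(fold_enrichment):
    """Per-row Z-score: (value - row mean) / row SD, computed for each
    CdnL peak across the ChIP-seq samples, as in the heat map above."""
    m = np.asarray(fold_enrichment, dtype=float)
    mean = m.mean(axis=1, keepdims=True)
    sd = m.std(axis=1, ddof=1, keepdims=True)  # sample SD (assumption)
    return (m - mean) / sd

# one peak with fold enrichments 1, 2, 3 across three samples
z = row_z_scores([[1.0, 2.0, 3.0]])
```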
RNA sequencing (RNA-seq)
For 0 and 60 minute starvation samples, cultures of 3 independent colonies of EG865 and EG2530 were inoculated into 2 mL M2G overnight. The next day, cultures were diluted into 4 mL M2G and grown to an OD 600 of 0.4 – 0.6. Cells were prepared for carbon starvation as described above, except after the final wash, cells were resuspended in 4 mL M2 and were split into two 2 mL samples each. One sample for each replicate was incubated at 30°C shaking at 225 rpm for 60 minutes, while the other was stabilized using RNAprotect Bacteria Reagent (Qiagen Catalog #76506) following manufacturer’s instructions. Briefly, 4 mL RNAprotect was added to 15 mL conical tubes, to which the 2 mL culture samples were added. The conical tubes were vortexed for 5 seconds, incubated at room temperature for 5 minutes, and then centrifuged for 10 minutes at 5,000 x g. The pellets were flash frozen in liquid nitrogen and stored at −80°C. The 60-minute starved samples were harvested in the same way.
For 24 hour starved and 60 minute recovery samples, cultures of 3 independent colonies were inoculated into 6 mL M2G and grown overnight. Once cultures were at an OD 600 of 0.4 – 0.6, cultures were prepared for carbon starvation as described above except after final wash, cells were resuspended in 6 mL M2 and incubated at 30°C shaking at 225 rpm for 24 hours. After 24 hours, 2 mL of each culture was removed and stabilized using RNAprotect as described above and flash frozen in liquid nitrogen. To the remaining 4 mL of culture, 40 μL of 20% glucose solution was added, and cultures were incubated for 60 minutes. Recovery samples were stabilized with RNAprotect and flash frozen in liquid nitrogen.
All samples were processed at SeqCenter (Pittsburgh, PA). There, samples were DNase treated with Invitrogen DNase (RNase free). Library preparation was performed using Illumina’s Stranded Total RNA Prep Ligation with Ribo-Zero Plus kit, 10 bp IDT for Illumina indices, and Caulobacter-specific rRNA depletion probes. Sequencing was done on a NextSeq2000, giving 2×51 bp reads. Quality control and adapter trimming were performed with bcl2fastq (version 2.20.0.445; default parameters). Read mapping was performed with HISAT2 (version 2.2.0; default parameters + ‘--very-sensitive’) ( 43 ). Read quantification was performed using Subread’s featureCounts functionality (version 2.0.1; default parameters + ‘-Q 20’) ( 44 ). Read counts were loaded into R (version 4.0.2; default parameters) and were normalized using edgeR’s (version 1.14.5; default parameters) Trimmed Mean of M-values (TMM) algorithm ( 45 ). Normalized values were then converted to counts per million (cpm). Differential expression analysis was performed using edgeR’s exact test for differences between two groups of negative-binomial counts. All RNA-seq data have been deposited in the Sequence Read Archive (SRA) under accession numbers SRR27130076 – SRR27130078, SRR27130555 – SRR27130557, and SRR27146334 – SRR27146351, and are associated with BioProject PRJNA1049818.
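For orientation, the CPM conversion applied after TMM scaling amounts to dividing each count by its (scaled) library size. A minimal sketch (this is not a reimplementation of edgeR's TMM; the per-sample normalization factors are taken as given):

```python
import numpy as np

def counts_per_million(counts, norm_factors=None):
    """Convert a genes x samples count matrix to CPM. With TMM, edgeR
    multiplies each library size by a per-sample normalization factor;
    here those factors are simply an input (sketch, not edgeR itself)."""
    counts = np.asarray(counts, dtype=float)
    lib_sizes = counts.sum(axis=0)          # total reads per sample
    if norm_factors is not None:
        lib_sizes = lib_sizes * np.asarray(norm_factors, dtype=float)
    return counts * 1e6 / lib_sizes

# toy library of 1,000 reads: a gene with 100 reads maps to 100,000 CPM
cpm = counts_per_million([[100], [900]])
```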
Starvation outgrowth growth curves
EG865, EG2530, EG2652, EG2756, EG2776, and EG2861 were grown overnight in 2 mL M2G. The next day, biological triplicates were diluted to OD 600 = 0.05 in 100 μL M2G in a 96-well plate. A Cytation1 imaging reader (Agilent, BioTek) measured absorbance every 30 minutes for 36 hours with intermittent shaking. The remainder of each 2 mL culture was back-diluted into 5 mL M2G and incubated overnight. The next day, log-phase cells were harvested by centrifugation at 8,250 x g for 5 minutes, washed thrice with M2, resuspended in a total volume of 4.5 mL M2, and incubated for 24 hours. Following the 24-hour starvation, another growth curve in M2G was run as described above. The approximate time to reach OD 600 = 0.1 was calculated by linear interpolation between the averaged triplicate readings immediately below and above OD 0.1. Doubling times were calculated using GraphPad Prism software.
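The two calculations in this step can be sketched as follows; the interpolation mirrors the trendline approach described above, and the doubling-time formula is the standard exponential-growth relation (Prism's exact fitting procedure is not specified here, so this is an illustrative approximation):

```python
import math

def time_to_od(times, ods, target=0.1):
    """Linear interpolation between the two readings bracketing `target`,
    as in the time-to-OD-0.1 estimate described above (a sketch)."""
    for (t0, y0), (t1, y1) in zip(zip(times, ods), zip(times[1:], ods[1:])):
        if y0 <= target <= y1:
            return t0 + (target - y0) * (t1 - t0) / (y1 - y0)
    raise ValueError("target OD not crossed in the data")

def doubling_time(t0, od0, t1, od1):
    """Doubling time from two exponential-phase OD readings."""
    return (t1 - t0) * math.log(2) / math.log(od1 / od0)
```

For example, readings of 0.08 at 30 min and 0.12 at 60 min interpolate to OD 0.1 at 45 min.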
Competition experiments
EG3402, EG3404, EG3406, and EG3408 were grown overnight in 2 mL PYE with appropriate antibiotics. The next day, overnight cultures were pelleted at 16,300 x g for one minute and washed twice with M2G. Following the final wash, cells were resuspended in 10 mL M2G with appropriate antibiotics and grown overnight. The next day, stationary phase cells were diluted into 50 mL M2G with appropriate antibiotics and allowed to grow a minimum of 6 hours (approximately 2 doublings in M2G) to an OD 600 of 0.3 – 0.6. Cells were then synchronized with modifications to the original protocol ( 46 ). Briefly, cells were pelleted at 6,000 x g for 10 minutes and then resuspended in 1.5 mL cold 1X M2 salts. The resuspension was transferred to 15 mL Corex tubes. Then, 1.5 mL cold Percoll was added and cells were centrifuged at 15,000 x g for 20 minutes at 4°C. Swarmer cells were transferred to 15 mL conical tubes, 1X M2 salts were added to fill the tube, and cells were pelleted at 10,000 x g for 5 minutes. Cell pellets were resuspended in 1 mL 1X M2 salts and centrifuged at 16,300 x g for 1 minute. The resulting pellet was resuspended in M2 for nutrient-poor conditions or M2G for nutrient-rich conditions and OD 600 was recorded.
When starting in nutrient-poor (M2) conditions, cultures were equally diluted to OD 600 = 0.25 - 0.4 and mixed 1:1 (EG3402 : EG3408, EG3404 : EG3406) in 5 mL M2 and incubated for 24 hours. Prior to incubation, 50 μL of the mixed cultures were serially diluted, and 100 μL of 10 −4 – 10 −6 dilutions were plated on both PYE-spectinomycin and PYE-kanamycin plates. When starting in nutrient-rich conditions, the same procedure was followed except after plating, the mixed cultures were diluted to OD ~ 0.0001 and 0.00005 in 5 mL M2G and allowed to grow for 24 hours.
Every 24 hours, a culture sample was taken for plating as described above. Colony forming units (CFUs) were counted on plates following a 2-day incubation at 30°C, and a ratio called the competitive index was calculated from the CFUs formed by the CdnLDD strains compared to the WT strains (EG3408/EG3402; EG3406/EG3404) on the plates that yielded the maximum number of countable colonies.
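The competitive index defined above is simply a CFU ratio. A minimal sketch (the optional dilution arguments are our addition, for the case where the countable mutant and WT plates come from different serial dilutions; they default to the equal-dilution case described in the text):

```python
def competitive_index(cfu_mutant, cfu_wt, dilution_mutant=1.0, dilution_wt=1.0):
    """Competitive index: ratio of CdnLDD-strain CFUs to WT-strain CFUs
    (e.g. EG3408/EG3402), correcting for plating dilution if the countable
    plates came from different dilutions (dilution handling is our assumption)."""
    return (cfu_mutant / dilution_mutant) / (cfu_wt / dilution_wt)

# equal dilutions: 80 mutant colonies vs 160 WT colonies gives CI = 0.5
```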
If in nutrient-rich conditions, a volume of cells approximately equal to OD 600 = 0.25 were pelleted, washed twice with M2, resuspended in 5 mL M2, and incubated for a 24-hour starvation. If in nutrient-poor conditions, cells were diluted to an OD ~ 0.0003 in 5 mL M2G and incubated for a 24-hour recovery. This was repeated for a total of three rounds. The entire process was repeated for a total of 4 biological replicates for the competition starting in nutrient-poor conditions and 3 biological replicates for the competition starting in nutrient-rich conditions. For the competition in only nutrient-rich conditions, a similar procedure was followed, but every 24 hours cells were diluted to an OD ~ 0.00005 in 5 mL M2G for a total of 6 rounds. The entire process was repeated to obtain 3 biological replicates. | RESULTS
CdnL is cleared during the stringent response in a SpoT-dependent manner
Our prior work indicated that many anabolic genes are regulated in a CdnL-dependent manner, apparently echoing transcriptional changes that occur during the SR. This prompted us to review transcriptomic data from our lab and others to explore a potential relationship between CdnL and the SR. In comparing Δ cdnL cells to wild-type (WT) starved cells, we find that 213 genes downregulated during carbon starvation are also downregulated in Δ cdnL cells, which is a significant enrichment as determined by hypergeometric probability ( Fig. 1A ) ( 14 , 23 ). This enrichment is, at least in part, SpoT-dependent, as 130 genes that are downregulated in Δ cdnL cells are downregulated during carbon starvation in a SpoT-dependent manner, which is also statistically significant ( Fig. 1A ) ( 6 , 14 ). These overlaps suggest a link between activation of the SR by SpoT and the absence of CdnL.
To investigate a potential connection between SR activation and loss of CdnL, we assessed CdnL protein levels during nitrogen, carbon, and phosphate starvation. Both carbon and nitrogen limitation activate the SR in Caulobacter , while phosphate limitation does not ( 1 ). To do this, Caulobacter cells were grown in minimal media, then washed and incubated in media lacking a carbon, nitrogen, or phosphate source. Cell lysates were sampled at up to 90 minutes of starvation and CdnL levels were assessed by immunoblotting. We found that CdnL was cleared in WT cells under both nitrogen and carbon starvation with half-lives of 18 and 15 min, respectively ( Fig. 1B , C and Table S1 ). Notably, CdnL levels remained stable under phosphate starvation ( Fig. 1B , C and Table S1 ).
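Half-lives such as the 18 and 15 min values above are typically derived by fitting quantified band intensities to first-order decay; the exact fitting method is not specified in this excerpt, so the following is our illustrative sketch of such a fit:

```python
import math

def half_life(times, levels):
    """Protein half-life from a least-squares fit of ln(level) versus
    time, assuming first-order (exponential) decay. Sketch only: the
    paper's quantification procedure may differ."""
    n = len(times)
    y = [math.log(v) for v in levels]
    tbar = sum(times) / n
    ybar = sum(y) / n
    num = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
    den = sum((t - tbar) ** 2 for t in times)
    slope = num / den          # fitted decay slope (negative for decay)
    return -math.log(2) / slope

# a perfect two-fold drop every 15 minutes recovers a 15 min half-life
hl = half_life([0, 15, 30, 45], [1.0, 0.5, 0.25, 0.125])
```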
The fact that CdnL is cleared upon carbon and nitrogen starvation, but not phosphate starvation, is consistent with CdnL regulation being downstream of SR activation. Therefore, we asked if CdnL clearance depended on the only RSH enzyme in Caulobacter , SpoT, by repeating these starvation experiments in cells lacking SpoT (Δ spoT ) ( 6 ). During nitrogen and carbon starvation, CdnL levels were significantly more stable in the Δ spoT background than in WT, with the half-lives increasing to 60 min and over 500 min, respectively ( Fig. 1D , E , Fig. S1A , B , and Table S1 ). CdnL was also stabilized during carbon starvation in the presence of a (p)ppGpp synthetase-dead SpoT variant (SpoTY323A) ( Fig. 1D , E and Table S1 ) ( 2 ). These observations indicate that both SpoT and (p)ppGpp are necessary for CdnL clearance during starvation.
We next tested if (p)ppGpp is sufficient for CdnL clearance using a strain harboring a xylose-inducible, constitutively active version of the E. coli (p)ppGpp synthetase RelA (RelA’) or a catalytically inactive RelA variant (RelA’-dead) as a negative control ( 4 ). We found that CdnL was cleared under nutrient-replete conditions upon expression of RelA’ with a half-life of 103 min, while CdnL is stable in the presence of RelA’-dead, indicating that (p)ppGpp is sufficient to clear CdnL ( Fig. 1F , G and Table S1 ). Taken together, these results suggest that CdnL is cleared during conditions that activate the SR, and that (p)ppGpp is both necessary and sufficient to induce clearance. For simplicity, we chose to use carbon starvation to probe the mechanism and importance of CdnL clearance during SR activation.
Transcriptional control of cdnL is not sufficient to regulate CdnL levels during the SR
We observed that CdnL protein levels decrease upon carbon starvation, but it was unclear if regulation of CdnL levels during starvation occurs transcriptionally, post-transcriptionally, and/or post-translationally. Indeed, previous studies showed that cdnL transcript levels also decrease upon carbon starvation ( 23 ). We first asked if transcriptional activity at the cdnL promoter contributes significantly to controlling CdnL levels during the SR. To do this, we expressed cdnL from a xylose-inducible promoter in Δ cdnL , Δ spoT Δ cdnL , and SpoTY323A Δ cdnL backgrounds. In this system, we can shut off cdnL transcription independently of SR activation or (p)ppGpp production by washing out xylose and resuspending cells in nutrient-replete media lacking xylose (M2G), or in carbon starvation media lacking both xylose and glucose (M2). Under carbon starvation conditions and after the removal of xylose to prevent further cdnL transcription, CdnL was rapidly cleared in Δ cdnL cells with a half-life of 13 min. In contrast, for Δ spoT Δ cdnL and SpoTY323A Δ cdnL cells, CdnL levels remained stable with half-lives of 139 and 334 min, respectively ( Fig. S2A , B and Table S1 ). This suggests that transcriptional control is not sufficient for CdnL clearance. There appear to be additional factors governing CdnL stability during the SR, however. While CdnL was cleared more rapidly in starvation conditions compared to nutrient-replete conditions in a Δ cdnL background after the removal of xylose (13 min compared to 27 min), we found that in the Δ spoT Δ cdnL and SpoTY323A Δ cdnL backgrounds, CdnL was actually cleared more rapidly in nutrient-replete conditions compared to starvation conditions after the removal of xylose (70 and 75 min compared to 139 and 334 min, respectively) ( Fig. S2A – D and Table S1 ). These data imply that CdnL levels are regulated post-transcriptionally in a manner that is partially SpoT-dependent.
CdnL is a ClpXP target
Since transcriptional regulation is not sufficient to control CdnL levels during the SR, we next asked if CdnL is regulated post-translationally. CdnL bears two alanine residues at its C-terminus, a common degradation signal (degron) recognized by the ClpXP protease ( 24 , 25 ). It was previously shown that the CdnL C-terminus was important for ClpXP-mediated degradation of CdnL in vivo , prompting us to ask how this might relate to regulation of CdnL during starvation ( 13 ). We asked if eliminating this putative degron impacts CdnL levels during starvation by changing the two alanine residues to aspartate residues, thus creating the variant CdnLDD ( 25 ). We found that CdnLDD protein was not cleared during either nitrogen or carbon starvation, suggesting that ClpXP targets CdnL for degradation during the SR ( Fig. 2A , B and Fig. S1A , C ). We asked if CdnL is a direct proteolytic target of ClpXP in vitro and found that purified ClpXP was capable of degrading purified CdnL, but not purified CdnLDD ( Fig. 2C ). The ability of ClpXP to degrade CdnL in vitro is not affected by (p)ppGpp, as the addition of (p)ppGpp did not further stimulate CdnL turnover. Finally, we assessed if ClpX activity was required in vivo for CdnL clearance during carbon starvation using a dominant-negative ATPase-dead ClpX variant (ClpX*) ( 26 , 27 ). When cells were starved of carbon after a 1-hour induction of ClpX*, CdnL levels were once again stabilized ( Fig. 2D , E ). These data confirm that CdnL is a ClpXP target as well as demonstrate that ClpXP-mediated proteolysis is required for regulation of CdnL during carbon starvation.
It is evident that the CdnL C-terminus is critical for the regulation of CdnL by ClpXP. To assess if the CdnL C-terminus is sufficient for ClpXP-mediated degradation during starvation, we tagged GFP with the last 15 amino acids from the CdnL tail (GFP-AA) or with the last 15 amino acids from the stabilized CdnL tail (GFP-DD). We then expressed these constructs on a plasmid under control of the ruv operon promoter, as transcript levels for the ruvA DNA helicase gene are not affected during carbon starvation ( 6 ). Strikingly, GFP-AA was rapidly degraded during carbon starvation while GFP-DD remained stable, demonstrating that the CdnL tail is indeed sufficient for degradation of GFP ( Fig. 2F , G ). We next sought to test if there is a SpoT-dependence to this degradation. To do this, we put the GFP-AA construct into the Δ spoT background and repeated the carbon starvation. GFP-AA was degraded more slowly in a Δ spoT background than in WT, with a half-life of 68 min in comparison to the 14 min half-life in the WT background, indicating that there is at least a partial SpoT-dependence to ClpXP-mediated degradation ( Fig. 2H , I and Table S1 ).
The CdnL-RNAP interaction is not sufficient to protect CdnL from proteolysis
There appear to be other factors governing CdnL stability outside of the C-terminus since we found that GFP-AA was cleared, albeit slowly, in Δ spoT cells, while CdnL itself remained stable ( Fig. 2H , I and Fig. 1D , E ). Since CdnL interacts with RNAP while GFP does not, we hypothesized that the interaction with RNAP may reduce CdnL degradation, possibly by protecting CdnL from proteolysis by ClpXP. To assess this, we constructed two CdnL point mutants, V39A and P54A, that have a reduced affinity for RNAP ( 13 ). We then assessed their stabilities during carbon starvation. Both the V39A and P54A mutants exhibited reduced half-lives (15 and 18 min, respectively) compared to WT CdnL (32 min) ( Fig. S3A , B and Table S1 ). These observations are consistent with the interaction between CdnL and RNAP contributing to CdnL stability under carbon starvation conditions. To test the sufficiency of this interaction in determining CdnL stability, we put the V39A and P54A point mutants into a Δ spoT background and performed another carbon starvation. If the CdnL-RNAP interaction serves as a barrier to CdnL proteolysis by ClpXP, we expected that these point mutants with a reduced RNAP interaction may still be less stable than WT CdnL in a Δ spoT background. However, we found that the point mutants are equally as stable as WT CdnL in the Δ spoT background ( Fig. S3C , D ). These results indicate that disrupting the RNAP-CdnL interaction is not sufficient to promote CdnL clearance during starvation and reiterate the dependency on SpoT to stimulate CdnL turnover.
CdnL is cleared during stationary phase in a SpoT- and ClpXP-independent manner
In addition to starvation, another time in which cells experience nutrient stress is during stationary phase when resources become depleted as a population saturates. (p)ppGpp levels peak upon entry into stationary phase, and then dip to stabilize at levels that are still higher than basal levels occurring during logarithmic growth ( 28 ). As we have found that both SpoT and ClpXP play important roles in maintaining CdnL levels during starvation when (p)ppGpp levels are high, we were curious to understand how these factors might contribute to regulating CdnL levels during stationary phase. To this end, we assessed CdnL protein levels from mid-log to late stationary phase in WT, CdnLDD, and Δ spoT ( Fig. S4 ). CdnLDD levels were consistently higher than CdnL levels in both WT and Δ spoT , with Δ spoT having the lowest CdnL levels throughout. In all strains, CdnL levels started to decline at an OD 600 of about 1.0. This reduction was stark for both WT and Δ spoT , while it was more gradual for CdnLDD. Surprisingly, we found that CdnL was cleared in all strains by an OD 600 of about 1.6, suggesting this mechanism of regulation is (p)ppGpp-independent and distinct from that of carbon starvation.
CdnL and CdnLDD exhibit reduced chromosomal binding during starvation
Mechanistically, we have shown that CdnL is cleared post-transcriptionally in a SpoT- and ClpXP-dependent manner, and that mutation of the C-terminal degradation signal to CdnLDD stabilizes protein levels during starvation. As CdnL is a transcription factor, we sought to understand if stabilization of CdnL protein would permit increased CdnL occupancy on the chromosome during starvation, presumably via interaction with RNAP, thus leading to transcriptomic changes that would be disadvantageous during carbon starvation. To begin to understand the consequences of CdnL regulation on transcriptional reprogramming during nutrient limitation, we used chromatin immunoprecipitation sequencing (ChIP-seq) to assess the occupancy of WT CdnL and CdnLDD on the chromosome under both nutrient-replete (M2G) conditions and after 60 minutes of carbon starvation. We selected this time point as WT CdnL was almost completely cleared ( Fig. 1B , C ) and broad transcriptional changes are reported at this point ( 23 ). After identifying 107 peaks with a > 2-fold enrichment in the binding profile of WT CdnL in M2G compared to the Δ cdnL control, we looked at these peaks in the binding profiles of WT CdnL and CdnLDD in both M2G and M2 ( Fig. S5 and Dataset S1 ). In M2G, the binding profiles of WT CdnL and CdnLDD were well correlated with each other and with our prior ChIP-Seq analysis of CdnL association across the genome ( Fig. 3A ) ( 14 ). Consistent with our previous transcriptomic analyses, DAVID analysis of locus function revealed a significant enrichment in aminoacyl tRNA biosynthesis and ribosomal genes ( Dataset S1 ) ( 14 ). During carbon starvation (M2 media), we observed three clusters of loci with distinct CdnL association profiles. WT CdnL had an overall reduction in binding at Group 1 genomic loci, while Groups 2 and 3 showed either equal or increased CdnL binding ( Fig. 3A ). 
This is surprising given that CdnL protein levels are significantly reduced in the absence of carbon ( Fig. 1B , C ). Interestingly, CdnLDD showed an overall decrease in binding after 60 minutes of starvation, similar to WT, despite the fact that CdnLDD levels in M2 are comparable to WT CdnL levels in M2G ( Fig. 3A ). These observations suggest that while CdnLDD is highly stable, it largely does not associate with the DNA after 60 minutes of carbon starvation.
CdnL stabilization alters ribosomal and metabolic gene transcription
Although CdnLDD does not interact with the chromosome after 60 minutes of carbon starvation, we were interested in understanding if CdnLDD could still facilitate transcriptional changes. To address this, we used RNA-sequencing (RNA-seq) to assess the transcriptional profiles of WT and CdnLDD at 0 and 60 minutes of carbon starvation. In comparing CdnLDD to WT at 0 minutes carbon starvation, we found that 101 genes were at least 2-fold differentially regulated ( Fig. 3B and Dataset S2 ). This number is smaller than the 525 genes that we had previously found to be differentially regulated when comparing WT to Δ cdnL ; however, Δ cdnL cells have growth and morphological defects under nutrient-rich conditions, while CdnLDD cells do not exhibit an obvious phenotype ( 14 ). DAVID functional annotation analyses pointed to an enrichment of transcripts related to the ribosome in CdnLDD, which reinforces our ChIP-seq data and previous transcriptomic analyses implicating CdnL in the regulation of anabolic genes ( Fig. 3A and Dataset S2 ) ( 14 ).
At 60 minutes of starvation, 71 genes were found to be at least 2-fold differentially regulated between WT and CdnLDD ( Fig. 3C and Dataset S3 ). This set of genes showed no overlap with the 101 genes differentially regulated at 0 minutes of starvation. The most highly upregulated transcript in WT compared to CdnLDD was CCNA_03545 (ribosome silencing factor), which acts to separate the large and small ribosomal subunits during stationary phase and stress, further implicating CdnLDD in altered ribosome activity ( Fig. 3C and Dataset S3 ) ( 29 ). Additionally, DAVID functional annotation analyses indicated a relative upregulation of transcripts associated with toxin-antitoxin systems and stress responses in WT, suggesting that CdnLDD may not be appropriately responding to nutrient deprivation ( Fig. 3C and Dataset S3 ). Nonetheless, the relatively small differences in the transcriptional profiles between WT and CdnLDD are consistent with their similar binding profiles demonstrated by ChIP-seq and further suggest that CdnL stabilization does not cause global changes in transcription after 60 minutes of starvation.
While CdnL stabilization does not globally impact the immediate transcriptional response to starvation, we wondered how the transcriptome might be impacted over a longer starvation. It is possible that Caulobacter would experience even longer periods of starvation in nature. Indeed, we found that the WT transcriptome changes from 60 minutes of starvation to 24 hours of starvation, prompting us to investigate potential differences between WT and CdnLDD transcripts at additional time points ( Dataset S4 ). We decided to assess the WT and CdnLDD transcriptional profiles at 24 hours of starvation, a time point in which CdnLDD remains stable ( Fig. S6A ). At this time point, we found 279 genes to be 2-fold differentially regulated between WT and CdnLDD, 21 of which were also differentially expressed after 60 minutes ( Fig. 3D and Dataset S5 ). Of these genes, 177 were more highly expressed in WT, including the ribosome silencing factor again, while 102 were more highly expressed in CdnLDD ( Fig. 3D and Dataset S5 ). For the genes more highly expressed in WT, DAVID functional annotation analyses revealed a relative upregulation of transcripts associated with cell projections, such as flagella and pili ( Fig. 3D and Dataset S5 ). Lower relative expression of these transcripts in CdnLDD could suggest that CdnLDD is continuing to promote cell cycle progression in spite of (p)ppGpp accumulation, which normally slows the swarmer-to-stalked transition and leads to an accumulation of flagellated swarmer cells ( 2 ). Of the genes upregulated in CdnLDD compared to WT, many were associated with ribosomes, including translation initiation and elongation factors ( Fig. 3D and Dataset S5 ). The production of ribosomes, and protein synthesis in general, is typically reduced under starvation conditions to slow growth and allow resources to be diverted to amino acid biosynthesis ( 30 – 32 ). 
Additionally, transcripts associated with metabolic pathways, such as the electron transport chain (ETC) and tricarboxylic acid (TCA) cycle, were found to be enriched in CdnLDD, suggesting increased flux through these pathways ( Fig. 3D and Dataset S5 ).
We also wanted to understand if CdnL stabilization could affect transcription during the adaptation phase when nutrients are replenished after a long starvation. Therefore, we assessed the transcriptome of WT and CdnLDD 60 minutes after the addition of glucose, a time point in which WT CdnL has not returned to basal levels ( Fig. S6A ). After replenishing glucose and allowing the cells to recover for 60 minutes, 199 genes were found to be differentially regulated ( Fig. 3E and Dataset S6 ). In addition to again finding a relative upregulation of transcripts associated with cell projections in WT, DAVID analyses also showed an enrichment of transcripts for genes involved in chemotaxis and oxidative phosphorylation ( Fig. 3E and Dataset S6 ). Transcripts associated with holdfast synthesis ( hfsA ) and attachment ( hfsB ) were also upregulated ( Fig. 3E and Dataset S6 ). Together, these observations suggest that the WT cells were responding to nutrient availability by resuming growth, cell cycle progression, and polar development. For CdnLDD, ribosome-related transcripts were again found to be upregulated ( Fig. 3E and Dataset S6 ). Interestingly, additional transcripts enriched in CdnLDD point to an upregulation of genes involved in the DNA damage response and repair pathways, including the alkylated DNA repair protein alkB (CCNA_00009), the SOS-induced inhibitor of cell division sidA (CCNA_02004), an oxidative DNA demethylation family protein (CCNA_00745), the bacterial apoptosis endonuclease bapE (CCNA_00663), and the DNA replication and repair protein recF (CCNA_00158) ( Fig. 3E and Dataset S6 ). Upregulation of these transcripts could suggest DNA damage in CdnLDD.
In summary, these RNA-seq analyses reveal that CdnL stabilization causes increased levels of transcripts associated with ribosomal and protein synthesis genes under both nutrient-rich and carbon-starved conditions, as well as an increase in transcripts associated with metabolic pathways after 24 hours of starvation. During nutrient-repletion, CdnL stabilization leads to an upregulation of transcripts related to DNA damage response and repair proteins.
CdnL clearance during carbon starvation is necessary for efficient outgrowth upon nutrient repletion
Since we observed transcriptional changes between WT and CdnLDD during starvation and upon nutrient repletion, we wondered if CdnL stabilization impacts the ability of cells to adapt to nutrient fluctuations. Because CdnL is a global regulator of growth, we hypothesized that CdnL clearance during starvation facilitates transcriptional changes that enable adaptation by ultimately promoting survival instead of proliferation. To this end, we measured the growth of WT and CdnLDD strains both before a 24-hour carbon starvation and after the 24-hour starvation upon glucose repletion. Before starvation, WT and CdnLDD showed similar growth rates, with doubling times of 2.1 and 2.2 hours, respectively ( Fig. 4A and Table S2 ). During the outgrowth from 24 hours of starvation, the strains again had similar growth rates; however, CdnLDD had a slightly longer lag time compared to WT, reaching an OD of 0.1 after about 9.3 hours as opposed to 8.3 hours ( Fig. 4B and Table S2 ).
Because we observed that cells with an inability to clear CdnL have a slight defect in outgrowth from carbon starvation, we wondered how this phenotype might relate to the effects of other transcriptional regulators involved in the stringent response. We repeated these experiments, this time assessing the growth of several mutant strains, including: a strain expressing an RNAP with a (p)ppGpp binding site 1 mutation (RNAP-1) ( 11 ); a strain lacking the transcription factor DksA (Δ dksA ), which normally would bind to RNAP to form the second (p)ppGpp binding pocket during starvation ( 9 , 12 ); a Δ dksA / RNAP-1 strain, thus generating an RNAP that is essentially blind to the major effects of (p)ppGpp; and a CdnLDD/ Δ dksA / RNAP-1 strain. Before starvation, the RNAP-1, Δ dksA , and Δ dksA / RNAP-1 strains all exhibited growth rates very close to those of WT and CdnLDD, while the CdnLDD/ Δ dksA/ RNAP-1 strain had a slightly longer doubling time and lag time ( Fig. 4A and Table S2 ). After the 24-hour starvation, there was a clear stratification in outgrowth, with the RNAP-1 strain displaying a similar growth pattern to CdnLDD, followed by Δ dksA and Δ dksA/ RNAP-1. The CdnLDD/ Δ dksA/ RNAP-1 strain showed the greatest lag upon outgrowth, reaching an OD of 0.1 after about 11.6 hours in comparison to 8.3 hours for WT ( Fig. 4B and Table S2 ). From these data, we conclude that the ability of Caulobacter to efficiently adapt to nutrient repletion following a period of starvation requires appropriate stringent response-mediated transcriptional reprogramming, which includes (p)ppGpp binding to RNAP, DksA, and clearance of CdnL.
CdnL clearance during the SR is important for adaptation to nutrient limitation during competition
On its own, CdnL stabilization appeared to cause a slight defect in adaptation to changes in nutrient status ( Fig. 4B and Table S2 ). We also observed transcriptional changes suggesting that CdnL stabilization promotes the transcription of genes that are typically downregulated upon SR activation ( Fig. 3B – E ). Because of these observations, we wondered if cells with a stabilized CdnL would be at a disadvantage if they were forced to compete with WT cells while adapting to changes in carbon availability. We tested this by competing a kanamycin-resistant or spectinomycin-resistant CdnLDD strain, which is unable to clear CdnL during starvation, with a spectinomycin-resistant or kanamycin-resistant WT strain, respectively. This allowed us to ask if the presence of CdnL is detrimental to fitness during starvation when in competition with cells that can effectively clear CdnL upon activation of the SR. We mixed equivalent OD units of the reciprocal strains in either M2G for nutrient-replete conditions or M2 for carbon-starved conditions. After 24 hours, we took a sample of the mixed culture and plated it on both a kanamycin plate and a spectinomycin plate in order to compare colony forming units (CFUs) between the CdnLDD and WT strains. We then either diluted the remaining culture into M2G to keep the cells in an exponential phase of growth, or into M2 to starve the cells.
We first tested the ability of CdnLDD to compete with WT under nutrient-rich conditions only. Over the course of 6 days, the CdnLDD and WT strains formed colonies in a ~1:1 ratio, suggesting that CdnL stabilization does not impact fitness when nutrients are readily available ( Fig. 4C , left). However, after periods of carbon starvation and recovery, the CdnLDD strains formed fewer colonies than the WT strains, indicating that the WT strains outcompeted the CdnLDD strains in the mixed culture ( Fig. 4C , middle). This trend was observed regardless of the initial nutrient composition, and a competitive disadvantage was observed for CdnLDD after a single round of starvation ( Fig. 4C , right). These data suggest that stabilization of CdnL puts cells at a disadvantage when trying to adapt to fluctuating nutrient availability and support the idea that one way in which Caulobacter adapts to nutrient stress is through clearance of this transcriptional regulator.
Adapting to environmental challenges such as nutrient deprivation requires efficient changes in transcription in order to downregulate biosynthetic genes and upregulate those that promote survival. While the direct binding of the SR alarmone (p)ppGpp to RNAP is a major way in which cells alter the transcriptome, other factors play key roles in allowing for an effective response to starvation. For instance, binding of the transcription factor DksA to RNAP has been shown to enhance the transcriptional effects of (p)ppGpp on RNAP ( 12 ); however, it is not only the presence of factors that promote stress responses that enables adaptation to nutrient stress, but also the absence of anabolic regulators that shifts the transcriptional balance from anabolism to survival.
Here, we show that Caulobacter CdnL, a CarD-family transcriptional regulator involved in ribosome biosynthesis and anabolism, is regulated upon SR activation ( 13 – 15 ). We find that transcriptional control at the cdnL promoter is not sufficient to control CdnL levels, and that regulation of CdnL protein occurs post-translationally in a manner dependent on SpoT, with (p)ppGpp being sufficient for CdnL clearance ( 23 ). We also show that CdnL is regulated at the protein level by the ClpXP protease, and that mutation of a degradation signal in the CdnL C-terminus stabilizes CdnL levels during the SR. Surprisingly, CdnLDD does not bind the chromosome after 60 minutes of starvation, nor does it globally alter transcription at this time point. Transcriptional changes that arise from CdnL stabilization are most obvious after 24 hours of starvation, where we find misregulation of ribosomal and metabolic genes. Ultimately, we find that clearance of CdnL is physiologically important, as CdnL stabilization causes adaptation defects. We propose that the combined and potentially synergistic actions of SpoT and ClpXP during the SR facilitate rapid clearance of CdnL protein. CdnL clearance allows for adaptation by transcriptionally downregulating ribosome biosynthesis and flux through metabolic pathways, thereby promoting Caulobacter survival when nutrients are lacking. When conditions become favorable again, re-introduction of CdnL enables cells to adapt by reactivating biosynthesis and metabolism, thereby allowing growth to resume.
The connection between the stringent response and the regulation of CdnL and its homologs in diverse bacteria has been a point of confusion. Indeed, transcription of the mycobacterial CdnL homolog, carD , was initially shown to be upregulated during starvation, and CarD depletion was reported to sensitize cells to stressors such as starvation ( 20 ). However, recent studies indicate that while carD transcript levels increase due to stabilization of the transcript by the anti-sense RNA AscarD, CarD protein levels ultimately decrease through decreased translation and degradation by the Clp protease. Transcription of ascarD is proposed to be under the control of the stress sigma factor SigF, thus connecting regulation of CarD to nutrient stress. This regulation of CarD was shown to help mycobacterial cells respond to various stresses ( 22 ).
Likewise, we find CdnL regulation to be functionally important for Caulobacter cells to adapt to the stress of nutrient fluctuations. Cells producing the stabilized CdnL variant, CdnLDD, have a slight defect upon outgrowth from carbon starvation, which is exacerbated in the absence of DksA and (p)ppGpp binding to RNAP ( Fig. 4B ). We also find that CdnLDD cells are outcompeted by WT cells when subjected to periods of carbon starvation, indicating that CdnL regulation is important for cells to adapt to nutrient stress ( Fig. 4C ).
These adaptation defects are likely caused by the transcriptional effects of CdnL stabilization ( Fig. 3B – E ). While we found that after 60 minutes of starvation, CdnLDD does not bind the chromosome and causes a relatively limited number of transcriptional changes, the effects of CdnL stabilization on transcription become more apparent after 24 hours of starvation and during the adaptation phase when nutrients are replenished. CdnL stabilization increases transcription of genes associated with ribosomes and metabolic pathways during nutrient stress. As ribosome biogenesis and maintenance are costly, it would be disadvantageous to promote these processes during times of limited resources. Likewise, promoting metabolic gene expression can lead to increased flux through these pathways during a time when extra metabolic activity would not be favored. Additionally, the redox reactions of the ETC, if inappropriately managed, can create damaging reactive oxygen species (ROS), which can cause DNA damage ( 33 , 34 ). As many DNA damage response and repair proteins are upregulated in CdnLDD 60 minutes after the addition of glucose following a 24-hour starvation, it is tempting to speculate that increased flux through the ETC is creating ROS and causing DNA damage. The repair of damaged DNA coupled with the costly misregulation of ribosomes can put cells with a stabilized CdnL at a disadvantage when trying to adapt to nutrient fluctuations and reinitiate growth.
Regulation of CdnL during stress appears to be a common theme across diverse bacterial phyla. The Borrelia burgdorferi CdnL homolog, called LtpA, was found to be produced at 23°C, a condition that has been widely used to mimic B. burgdorferi in unfed ticks, while levels during incubation of cells at 37°C as well as during mammalian infection were significantly reduced ( 35 ). Deletion of ltpA prevented B. burgdorferi from infecting mice via tick infection, suggesting that LtpA could be important for B. burgdorferi ’s survival within the tick vector and/or transmission to the mammalian host ( 35 , 36 ). Similarly, depletion of the Mycobacterium tuberculosis homolog prevented cells from replicating and persisting in mice ( 20 ). Bacillus cereus homologs were found to be upregulated in response to various stressors and were important in the recovery-response to heat shock ( 37 – 39 ). Because CdnL homologs are broadly found and have been implicated in promoting adaptation and survival under different conditions, we believe that regulation of this transcription factor is a conserved mechanism enabling bacteria to adapt to stress.

In response to nutrient deprivation, bacteria activate a conserved stress response pathway called the stringent response (SR). During SR activation in Caulobacter crescentus , SpoT synthesizes the secondary messengers (p)ppGpp, which affect transcription by binding RNA polymerase to downregulate anabolic genes. (p)ppGpp also impacts expression of anabolic genes by controlling the levels and activities of their transcriptional regulators. In Caulobacter , a major regulator of anabolic genes is the transcription factor CdnL. If and how CdnL is controlled during the SR and why that might be functionally important is unclear. Here, we show that CdnL is downregulated post-translationally during starvation in a manner dependent on SpoT and the ClpXP protease.
Inappropriate stabilization of CdnL during starvation causes misregulation of ribosomal and metabolic genes. Functionally, we demonstrate that the combined action of SR transcriptional regulators and CdnL clearance allows for rapid adaptation to nutrient repletion. Moreover, cells that are unable to clear CdnL during starvation are outcompeted by wild-type cells when subjected to nutrient fluctuations. We hypothesize that clearance of CdnL during the SR, in conjunction with direct binding of (p)ppGpp and DksA to RNAP, is critical for altering the transcriptome in order to permit cell survival during nutrient stress.

ACKNOWLEDGEMENTS
We thank Allison Daitch and Marie Stoltzfus for the construction of some Caulobacter strains and plasmids. We thank Allen Buskirk and Annie Campbell for helpful discussions about ribosomes. This project was supported by funds from the NIH (NIH-NIGMS Grant R35GM136221 to E.D.G. and Grant R35GM130320 to P.C.) and the Swiss National Science Foundation (SNSF) (project grant 310030_212531 to P.H.V.). E.S. was supported in part through the Biochemistry, Cellular, and Molecular Biology training program (NIH-NIGMS Grant T32GM144272).

bioRxiv. 2023 Dec 21: 2023.12.20.572625
|
PMC10769382 (PMID: 38187625)

Introduction
Genome scans, where genetic variants across the genome are tested for association with traits of interest, are an important tool for discovering insights into the etiology of a trait or disease. Recent advancements in high-throughput technologies make it possible to collect a large number of traits in a single individual from a single assay. Examples include studies with transcriptomics, metabolomics, microbiome data, etc. A common first step in analyzing these data is to compute a genome scan for each trait. This can be a computational challenge since many thousands of traits may be measured. These computational challenges are magnified when linear mixed models (LMMs), the standard approach for genetically structured populations ( Li and Zhu, 2013 ), are used, since LMMs are more computationally demanding than linear models. In this work, we tackle the problem of computing genome scans for a large number of quantitative traits using LMMs, with the goal of providing runtimes of a few seconds for populations of modest size.
As an example dataset, consider the liver proteome data from the BXD Longevity Study, which measured approximately 35K liver proteins on 150 mice from 50 BXD strains ( Ashbrook et al., 2021 ). Here, the goal is to map the associations between all 35K liver proteins and approximately 7K genetic markers. The standard approach is to fit a linear mixed model for each protein–marker pair. This amounts to about 245 million LMM fits. If one is using an LMM genome scan tool such as GEMMA ( Zhou and Stephens, 2012 ), then that program has to be run 35K times. In addition to the tedium of having to run the program repeatedly, the runtime can be overwhelming even if the tasks were distributed and processed in parallel. When using an interactive web service such as GeneNetwork ( Sloan et al., 2016 ), speed is key, as the user is expecting an answer in less than a minute or even seconds. For interactive analysis, the user may be prepared to sacrifice some accuracy to get a quick overview of the main traits and markers associated with each other; a more accurate and computationally intensive analysis can be done as a follow-up. Thus, our goal was to design an algorithm that could complete the analysis of the BXD liver proteome data in a few seconds; we were prepared to make some approximations to accomplish that.
Our implementation, which we call “BulkLMM” (for performing LMMs on a lot of traits “in bulk”), uses ideas for speeding up linear model scans for many traits, combined with techniques for speeding up univariate LMMs, optimization techniques, and efficient implementation using the Julia programming language ( Bezanson et al., 2017 ).
The problem of efficiently computing genome scans for a large number of traits using linear models was tackled by Shabalin (2012) who showed that the scans can be greatly speeded up using matrix multiplication instead of performing scans one by one for each marker and trait. The main reason for the speedup is that there are efficient algorithms for matrix multiplication. The tensorQTL ( Taylor-Weiner et al., 2019 ) and LiteQTL ( Trotter et al., 2021 ) packages used GPUs to speed up the computations further. They use the fact that matrix multiplication uses similar operations with different data, which is an ideal candidate for GPU computation. For computing the scans using LMMs, however, we have to modify the approach used for linear models.
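The matrix-multiplication trick for linear-model scans can be sketched in a few lines of NumPy (shown here in Python for illustration; BulkLMM itself is written in Julia, and the data below are simulated): after column-standardizing the trait matrix Y and the marker matrix G, a single matrix product yields the full grid of trait–marker correlations, from which LOD scores follow.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 100, 500, 200                        # individuals, traits, markers
Y = rng.standard_normal((n, m))                # simulated trait matrix
G = rng.integers(0, 2, (n, p)).astype(float)   # simulated binary genotypes

def standardize(A):
    """Column-standardize so that cross-products become correlations."""
    A = A - A.mean(axis=0)
    return A / A.std(axis=0)

Ys, Gs = standardize(Y), standardize(G)

# One matrix multiplication yields all m * p trait-marker correlations at once.
R = Ys.T @ Gs / n                    # m x p correlation matrix
LOD = -(n / 2) * np.log10(1 - R**2)  # LOD score for every trait-marker pair
```

Because the entire scan reduces to one dense matrix product, it benefits directly from optimized BLAS routines, and the same operation maps naturally onto GPUs, which is the observation exploited by tensorQTL and LiteQTL.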
For speeding up the LMM, we use ideas from the FaSTLMM ("Factored Spectrally Transformed Linear Mixed Models") family of algorithms ( Broman et al., 2019 ; Kang et al., 2010 , 2008 ; Lippert et al., 2011 ; Zhou and Stephens, 2012 ). Roughly speaking, FaSTLMM speeds up maximum likelihood estimation by first transforming the data by the spectral decomposition of the kinship matrix used to express the genetic relatedness of individuals. This effectively transforms the problem into a weighted linear regression problem, which can then be efficiently solved using standard algorithms. To speed up the LMMs in bulk further, we use ideas from GridLMM ( Runcie and Crawford, 2019 ), wherein a finite grid of parameter values is considered for optimization. The core idea of BulkLMM is to use highly optimized vectorized and matrix operations whenever possible (as in LiteQTL) and to make judicious choices in the FaSTLMM algorithmic pipeline that reduce expensive operations without sacrificing too much accuracy. In addition to its main functionality designed for fast LMM scans of multiple traits, BulkLMM also provides fast permutation testing for a single trait ( Abney, 2015 ) and offers features for stabilizing numerical computations.
The remainder of the article is organized as follows. In Section 2, we outline the modeling framework and the general algorithm for fitting the model. In Section 3, we detail our computational methods for speeding up genome scans for multiple traits, including techniques for stabilizing numerical computations. In Section 4, we analyze two datasets using our implementation, comparing the runtime performance of our methods with that of existing methods. Through experiments running our package to perform association mapping on more than 32k expression traits, we demonstrate that BulkLMM achieves significant runtime improvements over other popular software tools. We end in Section 5 by summarizing our conclusions, detailing scenarios suitable for analysis using BulkLMM, and outlining future directions.
Statistical framework
In this section we outline our statistical approach beginning with a description of the LMM, following with the steps required to fit the LMM, and ending with our approach to permutation testing.
Linear mixed models (LMMs).
Consider the situation where we have $m$ traits and $p$ markers measured on $n$ individuals. Let $y_j$ denote the $j$-th trait vector ($j = 1, \ldots, m$) and $g_k$ ($k = 1, \ldots, p$) denote the $k$-th marker coded as allele dosage (taking values between 0 and 1). Assume the following generative model for $y_j$:

$$y_j = X\alpha_j + g_k\beta_{jk} + \epsilon_j.$$
Here, the matrix $X$ contains the covariates that are independent of the tested marker $g_k$, and the vector $\alpha_j$ contains the corresponding coefficients. We let the marker effect be specifically noted by the indices of trait and marker as $\beta_{jk}$. Then, $X\alpha_j + g_k\beta_{jk}$ becomes the systematic component of the model.
The random component $\epsilon_j \sim N(0, \sigma^2_g K + \sigma^2_e I)$ contributes to the variance in the expression trait, which we assume to come from two variance components: $\sigma^2_g K$ and $\sigma^2_e I$. We denote the variance explained by genetic variants as $\sigma^2_g$ and the remaining unexplained variance as $\sigma^2_e$. The matrix $K$, usually referred to as the kinship matrix, measures pairwise relatedness identical by descent between each pair of individuals. Here, we further define the heritability parameter $h^2 = \sigma^2_g / (\sigma^2_g + \sigma^2_e)$, which denotes the ratio of the genetic variance to the total variance. In this way, we may re-parameterize both variance component parameters $(\sigma^2_g, \sigma^2_e)$ using $(h^2, \sigma^2)$, where $\sigma^2 = \sigma^2_g + \sigma^2_e$ is the total variance, and we emphasize that $h^2$ is bounded in the interval $[0,1)$. We will later explain how such reparameterization facilitates our estimation algorithms. For each genome scan, we aim to test the hypothesis of no marker effect ($\beta_{jk} = 0$). Essentially, to run genome scans through all pairs of traits and markers, we perform a one-degree-of-freedom test for each pair.
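As a concrete illustration, the following NumPy sketch simulates a single trait from this generative model; the kinship matrix, heritability, and effect sizes here are toy values chosen for the example, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
Z = rng.standard_normal((n, 400))
K = Z @ Z.T / 400                        # toy positive semi-definite kinship matrix

X = np.ones((n, 1))                      # covariates: intercept only
alpha = np.array([1.0])                  # covariate coefficients (assumed)
g = rng.integers(0, 2, n).astype(float)  # tested marker, coded as allele dosage
beta = 0.5                               # marker effect (assumed)

h2, s2 = 0.4, 1.0                        # heritability and total variance (assumed)
s2g, s2e = h2 * s2, (1 - h2) * s2        # back-transform to the two variance components

# y = X alpha + g beta + eps,  eps ~ N(0, s2g * K + s2e * I)
eps = rng.multivariate_normal(np.zeros(n), s2g * K + s2e * np.eye(n))
y = X @ alpha + g * beta + eps
```

Note how specifying $(h^2, \sigma^2)$ and back-transforming to $(\sigma^2_g, \sigma^2_e)$ mirrors the reparameterization described above.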
Fitting the LMM.
We take a similar approach to the FaST-LMM algorithm. For simplicity of notation, we omit the subscripts and denote simply $y$ and $g$ as the trait and marker of interest for each test. We define $X$ to be the design matrix combining the tested marker $g$ and the additional covariates independent of $g$, with the corresponding coefficient vector $\beta$. The fitting of an LMM consists of the following steps.
Decorrelation.
Given the spectral decomposition of the kinship matrix $K = U D U^\top$, where the diagonal matrix $D$ contains the eigenvalues of $K$ on the diagonal and $U$ is the matrix with columns of the corresponding eigenvectors, we rotate the original $y$ and $X$ by $U^\top$:

$$\tilde{y} = U^\top y, \qquad \tilde{X} = U^\top X.$$
After rotation, the transformed data are distributed as

$$\tilde{y} \sim N(\tilde{X}\beta, \; \sigma^2_g D + \sigma^2_e I).$$
We denote the ratio of the two variance components by $\delta$, such that $\delta = \sigma^2_e / \sigma^2_g$. Then, the covariance structure can be written as

$$\sigma^2_g D + \sigma^2_e I = \sigma^2_g (D + \delta I).$$
We note that the covariance of the rotated trait is defined by a diagonal matrix with diagonal elements $\sigma^2_g(\lambda_i + \delta)$, where $\lambda_i$ is the $i$-th eigenvalue of $K$. Therefore, after rotation, we will have independent observations in the rotated trait, each with a heteroskedastic (unequal) marginal variance defined by the two variance components $\sigma^2_g$ and $\sigma^2_e$ (in terms of $\delta$) and a certain eigenvalue $\lambda_i$. We may then apply the maximum-likelihood principle, or more specifically, the weighted least-squares (WLS) approach for estimating the fixed marker effect and the parameters of the two variance components.
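The effect of the rotation can be checked numerically: rotating a covariance of the form $\sigma^2_g K + \sigma^2_e I$ by $U^\top$ yields a diagonal matrix with entries $\sigma^2_g \lambda_i + \sigma^2_e$. A small NumPy verification with a toy kinship matrix and illustrative variance components:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
Z = rng.standard_normal((n, 200))
K = Z @ Z.T / 200                     # toy kinship matrix

lam, U = np.linalg.eigh(K)            # K = U diag(lam) U^T

s2g, s2e = 0.6, 0.4                   # illustrative variance components
V = s2g * K + s2e * np.eye(n)         # covariance of the original trait

# Rotating by U^T diagonalizes the covariance.
V_rot = U.T @ V @ U
expected = np.diag(s2g * lam + s2e)
assert np.allclose(V_rot, expected, atol=1e-8)
```

The off-diagonal entries of the rotated covariance vanish (up to floating-point error), which is what makes the subsequent weighted-regression step valid.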
Weighted least squares (WLS).
We write out the log-likelihood function after observing the transformed data $(\tilde{y}, \tilde{X})$ as

$$\ell(\beta, \sigma^2_g, \delta) = -\frac{1}{2}\left[\, n \log(2\pi\sigma^2_g) + \log|V| + \frac{1}{\sigma^2_g}(\tilde{y} - \tilde{X}\beta)^\top V^{-1} (\tilde{y} - \tilde{X}\beta) \,\right].$$
Here, $V = D + \delta I$ is the diagonal matrix with diagonal elements $\lambda_i + \delta$, where again $\delta = \sigma^2_e / \sigma^2_g$, for $i = 1, \ldots, n$. Assuming that the kinship matrix is given and, therefore, its spectral decomposition is known, we notice that the matrix $V$ only depends on the unknown parameter $\delta$. Given $\delta$, we derive the maximum-likelihood estimates of the parameters $\beta$ and $\sigma^2_g$ in closed form:

$$\hat{\beta} = (\tilde{X}^\top V^{-1} \tilde{X})^{-1} \tilde{X}^\top V^{-1} \tilde{y}, \qquad \hat{\sigma}^2_g = \frac{1}{n} (\tilde{y} - \tilde{X}\hat{\beta})^\top V^{-1} (\tilde{y} - \tilde{X}\hat{\beta}).$$
This step is equivalent to estimation by weighted regression taking each weight as $w_i = 1/(\lambda_i + \delta)$, for $i = 1, \ldots, n$.
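Given $\delta$, the closed-form WLS step is cheap; the sketch below (Python/NumPy for illustration, not BulkLMM's actual Julia code) implements the two estimators above:

```python
import numpy as np

def wls_fit(y_rot, X_rot, lam, delta):
    """Closed-form ML estimates of beta and sigma^2_g given delta.
    y_rot, X_rot: trait and design matrix already rotated by U^T;
    lam: eigenvalues of the kinship matrix K."""
    w = 1.0 / (lam + delta)                  # weights w_i = 1 / (lambda_i + delta)
    Xw = X_rot * w[:, None]                  # V^{-1} X, since V is diagonal
    beta = np.linalg.solve(X_rot.T @ Xw, Xw.T @ y_rot)
    resid = y_rot - X_rot @ beta
    s2g = (w * resid**2).sum() / len(y_rot)  # (1/n) r^T V^{-1} r
    return beta, s2g

# Sanity check: with unit eigenvalues and delta = 0 this is ordinary least squares.
y = np.array([1.0, 2.0, 3.0, 4.0])
X = np.column_stack([np.ones(4), np.arange(4.0)])
beta, s2g = wls_fit(y, X, np.ones(4), 0.0)
# y is exactly 1 + x, so beta recovers (1, 1) with zero residual variance.
```

Because $V$ is diagonal, applying $V^{-1}$ is a row-wise rescaling rather than a matrix inversion, which is the source of the speedup after the spectral transform.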
Optimization of $h^2$.
Plugging in the two closed-form solutions for the parameters $\beta$ and $\sigma^2_g$, we notice that the log-likelihood function for the data can be seen as a function of only the single parameter $\delta$.
In order to better estimate this parameter, we parameterize it using $h^2 = 1/(1 + \delta)$, which has a physical meaning as the proportion of the total variance due to genetic variants and is bounded in the interval $[0,1)$.
Solving for the heritability estimate that maximizes the objective function finally gives us the parameter estimates that jointly maximize the likelihood. We applied Brent's method ( Brent, 1971 ), a one-parameter optimization algorithm, to solve for the estimated parameter value.
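The one-parameter optimization can be sketched as follows, profiling out the closed-form estimates and handing the remaining objective to a bounded Brent search (here via SciPy on simulated data; the BulkLMM implementation uses Optim.jl in Julia):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 200
S = rng.uniform(0.1, 3.0, size=n)          # eigenvalues of a toy kinship matrix
X_rot = np.column_stack([np.ones(n), rng.normal(size=n)])
h2_true = 0.5
y_rot = X_rot @ np.array([1.0, 0.3]) + rng.normal(size=n) * np.sqrt(
    h2_true * S + (1 - h2_true))

def neg_profile_loglik(h2):
    # Profile out the fixed effects and variance in closed form; h2 remains.
    w = 1.0 / (h2 * S + (1.0 - h2))
    XtW = X_rot.T * w
    beta = np.linalg.solve(XtW @ X_rot, XtW @ y_rot)
    resid = y_rot - X_rot @ beta
    sigma2 = np.sum(w * resid**2) / n
    return 0.5 * n * np.log(sigma2) + 0.5 * np.sum(np.log(h2 * S + (1.0 - h2)))

# Brent's method on the bounded interval [0, 1).
res = minimize_scalar(neg_profile_loglik, bounds=(0.0, 1.0 - 1e-6),
                      method="bounded")
h2_hat = res.x
```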
Permutation testing.
Our approach to permutation testing combines the approach of Abney (2015) with LiteQTL. The essential idea is that, given the heritability estimate and therefore the weight matrix, we can re-weight the observations so that the residuals have zero mean and unit variance. Under the normality assumption, they are also independent under the null. We can permute them several times to reconstruct traits under the null hypothesis, and then apply the LiteQTL (matrix multiplication) approach to fit the model under the null.
After the data are de-correlated and re-weighted, we write the model with separate notation for the tested marker and its corresponding effect. Under the null hypothesis of no marker effect, the marker term drops out of the model.
Note that the transformed trait measurements are independent but can have unequal means, as the matrix of control covariates is usually not the identity matrix. Regressing out the covariates gives us independent, identically distributed (i.i.d.) residuals.
Simply permuting the observations in the residual vector allows us to generate samples that are i.i.d. under the null assumption. Then, following the permutation-test procedure, we apply the evaluation scheme demonstrated above to each permuted trait (a vector of the same length as the original) to estimate the fixed marker effect and the resulting LOD test statistic. The empirical distribution of LOD scores from this permutation framework then lets us derive threshold values for determining the significance of the markers in each scan.
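The whole permutation scheme can be sketched in a few lines of illustrative NumPy (toy data and an assumed heritability of 0.4; not the package code): re-weight, regress out the covariates, permute the residuals, and score all permutations against a marker with one matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_perm = 100, 1000

# Toy rotated data: eigenvalues S, weights from an assumed heritability,
# an intercept-only covariate matrix, and a null trait.
S = rng.uniform(0.2, 2.0, size=n)
w = 1.0 / (0.4 * S + 0.6)
X_rot = np.ones((n, 1))
y_rot = rng.normal(size=n) / np.sqrt(w)

# Re-weight so the errors have unit variance, then regress out covariates;
# the residuals are (approximately) i.i.d. under the null.
ys = np.sqrt(w) * y_rot
Xs = np.sqrt(w)[:, None] * X_rot
beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
resid = ys - Xs @ beta

# Build many permuted null traits and score them against one marker with
# a single matrix multiplication.
perms = np.column_stack([rng.permutation(resid) for _ in range(n_perm)])
g = np.sqrt(w) * rng.integers(0, 3, size=n)
g = g - g.mean()
g /= np.linalg.norm(g)
P = perms - perms.mean(axis=0)
P /= np.linalg.norm(P, axis=0)
r = g @ P                                    # one correlation per permutation
lod = -(n / 2) * np.log10(1.0 - r**2)
threshold = np.quantile(lod, 0.95)           # empirical 5% LOD threshold
```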
In this section, we detail the computational strategies we used to speed up the computation.
Heritability estimation precision.
The computational complexity of the LMM fitting scheme comes mainly from estimating the heritability, which requires solving a one-parameter optimization problem on the objective function; the numerical method can be expensive. Once the heritability is estimated, the other two parameters have closed forms and are easily obtained. For the task of scanning multiple markers, fitting one linear mixed model at a time to test each marker, the "Exact" estimation scheme, as referred to in related work, assumes that the heritability is independent from one model to another. By "Exact" estimation, we mean that the heritability is re-estimated when testing each marker. This is a robust but expensive approach, especially when the number of markers is large. As a simple speed-up, the "Null" estimation scheme does not re-estimate the heritability at each marker; instead, it uses the approximate value obtained under the null model with only the baseline (non-marker) covariates and applies that same estimate to test all markers. In BulkLMM, for each assumption, we have developed scalable algorithms that perform fast even when scanning a large number of traits and markers. We borrow the names "Exact" and "Null" in the names of these algorithms to refer to the two assumptions made in estimating the heritability.
In the next subsection, we introduce the key computational technique underlying most of the speed-ups in our proposed algorithms.
Calculating LOD scores using matrix operations.
Assume each trait can be modeled by a simple linear regression with a single covariate; then, based on the fact that the squared Pearson correlation equals the coefficient of determination from testing the single independent variable in this case, we may write the LOD score as a function of the correlation coefficient r between each trait and marker pair: LOD = (n/2) log10(1 / (1 - r^2)), where n is the sample size.
For a set of traits and markers, we can construct a trait matrix and a marker matrix, with each column of the former being a trait and each column of the latter corresponding to a marker to be tested. Then, after standardizing both matrices so that each column has zero mean and unit norm, their pairwise correlations can be efficiently computed by a single matrix multiplication of the transposed trait matrix with the marker matrix ( Shabalin, 2012 ).
Finally, to convert the pairwise correlation coefficients to LOD scores, we only need to map each element of the correlation matrix through the simple one-parameter formula. With this scheme, we do not have to perform a separate linear regression for each trait and marker pair, and the LOD scores of all trait and marker pairs can be computed efficiently by operations on matrices for which highly optimized implementations are available.
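A hedged NumPy sketch of this bulk LOD computation on simulated traits and genotypes (the `standardize` helper is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_traits, n_markers = 100, 500, 2000
Y = rng.normal(size=(n, n_traits))
G = rng.integers(0, 3, size=(n, n_markers)).astype(float)

def standardize(M):
    """Give every column zero mean and unit norm."""
    M = M - M.mean(axis=0)
    return M / np.linalg.norm(M, axis=0)

Ys, Gs = standardize(Y), standardize(G)

# All pairwise correlations in one matrix product, then an element-wise
# map from r to LOD = (n/2) * log10(1 / (1 - r^2)).
R = Ys.T @ Gs                      # shape (n_traits, n_markers)
LOD = -(n / 2) * np.log10(1.0 - R**2)
```

The single product `Ys.T @ Gs` replaces half a million separate regressions in this example.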
Accelerating genome scans for multiple traits.
To adapt the matrix multiplication technique for bulk LOD score calculations in linear mixed models, we observe that by de-correlating and re-weighting the original data and using a given matrix , we achieve independent transformed data with uniform error variances. This allows us to apply the efficient approach used in simple linear regression. Therefore, the main difficulty lies in how we can reasonably estimate the weight matrix and, consequently, the heritability parameter for different traits.
For performing scans on a single trait, this matrix multiplication scheme can be applied by using a single-column trait matrix and constructing the marker matrix from the full genotypes at all tested markers. Then, by matrix multiplication and mapping of the pairwise correlations, we can efficiently compute the score between the single trait and every marker. This idea leads to our first algorithm, which naively scans one trait at a time with the exact estimate of heritability for each trait.
For convenience in describing the various algorithms we developed, we will use two subscripts to denote the heritability parameter for a particular trait and marker, indexed over the total numbers of traits and markers. We use a separate null subscript to denote the heritability under the null model, with no marker effect, for each trait. Related work uses the term "Exact" to indicate that the heritability for each marker is estimated independently; in contrast, our use of "Exact" in a method's name emphasizes that the estimated heritability comes from optimizing the actual objective function with numerical methods rather than by grid search. To avoid confusion, we use the term "Alt" (versus "Null") with the meaning that previous work attached to "Exact".
Based on exact estimation of the heritability.
Assuming exact per-trait estimation, our naive approach to extending the matrix multiplication strategy to the linear mixed model case is to construct the trait matrix using one trait at a time while constructing the marker matrix from all genome markers, as outlined in Algorithm 1.
For further speed-ups over this first naive method, it is tempting to construct the trait matrix using not just one trait but ideally a sizable fraction of all traits to be tested; the runtime would then shrink to a corresponding fraction of the first algorithm's. Grouping traits by their exact heritability estimates is not feasible, since parameters obtained by independently optimizing the objective function are very unlikely to be exactly equal. However, relaxing the precision of the estimation makes it possible to find common estimates shared by multiple traits, enabling those traits to be grouped so that a single matrix multiplication computes their scores in bulk. This approximation idea motivated us to further improve the execution time of our algorithms; in our experiments, estimating the heritability up to some level of precision was in most cases sufficient to generate reliably accurate results.
Based on grid-approximation estimation of the null-model heritability.
The second algorithm we propose, named Bulkscan-Null-Grid , further relaxes the accuracy required of the results by estimating the heritability of each trait approximately, on a grid of finitely many candidate values ( Runcie and Crawford, 2019 ). In this manner, multiple traits will share the same heritability estimate. The matrix multiplication approach for computing the LOD scores of traits modeled by linear mixed models can then be extended from testing one trait at a time, as our proposed algorithm "Bulkscan-Null-Exact" does, to testing multiple traits at once. The weighted likelihood function values for all traits are computed under different weights, each depending on a candidate value from the grid. The final estimate for each trait is the candidate value that yields the optimal value of the objective function. The next step is to create batches of traits sharing the same heritability estimate from the grid-search step. Within each batch, the traits are used to construct the matrix of responses and to perform the matrix multiplication demonstrated above to compute the scores for those traits.
For demonstration of the algorithm, we use notation for the input grid of possible candidate values and for the values of the objective function for each trait evaluated at a given grid point. The overview of the algorithm is as follows.
In Algorithm 2 , it is important to note that each grid value specifies a weight matrix. Consequently, the step of calculating the log-likelihood function values for a fixed grid value can be executed efficiently for all traits through one multivariate weighted regression, using a response matrix whose columns are the traits. The log-likelihood values for all traits at that grid value can then be seen as a row vector whose length is the number of traits. After computing the objective function values for all grid values, we stack these row vectors to form a matrix. Therefore, finding the optimal function value and the corresponding parameter value for each trait amounts to finding the maximum value in each column of the resulting matrix of log-likelihood values.
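An illustrative NumPy version of the grid step (toy data; the grid, weights, and batching logic are our sketch of the description above, not the Julia source):

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_traits = 120, 300
S = rng.uniform(0.2, 2.0, size=n)       # toy kinship eigenvalues
Y = rng.normal(size=(n, n_traits))
X0 = np.ones((n, 1))                    # null-model covariates (intercept)
grid = np.arange(0.0, 1.0, 0.1)         # candidate heritability values

# Null-model log-likelihood of every trait at every grid value, computed
# with one multivariate weighted regression per grid point.
ll = np.empty((len(grid), n_traits))
for i, h2 in enumerate(grid):
    v = h2 * S + (1.0 - h2)
    sw = 1.0 / np.sqrt(v)
    Xs, Ys = sw[:, None] * X0, sw[:, None] * Y
    B = np.linalg.lstsq(Xs, Ys, rcond=None)[0]
    rss = np.sum((Ys - Xs @ B) ** 2, axis=0)
    ll[i] = -0.5 * n * np.log(rss / n) - 0.5 * np.sum(np.log(v))

best = grid[np.argmax(ll, axis=0)]      # per-trait estimate on the grid

# Traits sharing an estimate form one batch for the matrix-multiplication step.
batches = {h2: np.flatnonzero(best == h2) for h2 in np.unique(best)}
```

Each batch can then be scored against all markers with a single matrix product, as in the simple linear regression case.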
Based on grid-approximation estimation of the alternative-model heritability.
The third method we propose, named Bulkscan-Alt-Grid , combines the grid-search approach for estimating the heritability with the matrix multiplication approach for efficiently computing scores. It computes each score based on a heritability estimated independently for each marker tested, for each trait.
A key observation is that, for each test of a trait against a design matrix containing the tested marker, the alternative-model log-likelihood can be recovered from the formula for the LOD score together with the null log-likelihood.
Therefore, under the linear mixed model scheme, for a given grid value we can first apply the matrix multiplication to compute pseudo-scores for all traits and markers. Since the null log-likelihood does not depend on the specific marker, it can also be calculated easily for all traits and markers by matrix operations, so we can recover the alternative-model log-likelihood at that grid value for every trait and marker from the above derivation. Specifically, these alternative-model log-likelihood values are stored in one traits-by-markers matrix for each value in the grid. By optimizing element-wise over the grid, we obtain the estimated alternative-model log-likelihood for each trait and marker under the optimal heritability for each alternative model containing that specific marker. Finally, the true score for each trait and marker is calculated from the alternative-model and null-model log-likelihoods, each evaluated under its own optimal heritability, estimated from the alternative model containing the marker and from the null model, respectively.
Numerical stabilizing techniques.
When performing a large number of genome scans in parallel, the chance of encountering unlikely situations is increased. Therefore, in addition to speed, we also have to pay attention to numerical stability, because otherwise a whole batch of computations can fail because of one unusual trait or marker. We first discuss boundary avoidance, which covers techniques for avoiding situations where the heritability estimate is exactly 1. We follow with an improvement to heritability estimation using Brent's method, subdividing the unit interval into sub-intervals to guard against multiple local maxima.
Boundary Avoidance.
Notice that, once the closed-form maximum-likelihood estimators are plugged in, the objective function for the optimization over the heritability can be written as a sum of heritability-dependent terms plus a constant term.
Let us look carefully at the log-likelihood function as the heritability estimate approaches 1, rewriting the above function accordingly.
As the heritability approaches 1, the residual variance term approaches 0. The log-likelihood will blow up to infinity if at least one eigenvalue of the kinship matrix is zero. That is possible when the kinship matrix is not full rank, for example when two or more individuals have the same genotype at all markers.
To correct the numerical issue of the heritability being estimated at 1, we take a Bayesian maximum a posteriori (MAP) approach for estimating the residual variance, imposing a prior on it during estimation ( Galindo Garre and Vermunt, 2006 ). The prior distribution represents our prior belief that the residual variance is very unlikely to be 0. Specifically, we use a Scaled-Inverse-chi-squared prior on the residual variance, with a scale parameter, a degrees-of-freedom parameter, and positive support. Therefore, rather than estimating the residual variance by maximizing the log-likelihood function, we estimate it by maximizing the log posterior, where the posterior is proportional to the product of the prior and the likelihood function of the data.
We make this prior choice mainly to take computational advantage of the Inverse-chi-squared/Normal conjugacy, which lets us evaluate the posterior easily without computing the integral for the marginal distribution of the data ( Gelman et al., 2013 ). Finally, to estimate the heritability, we plug the MAP estimates of the two variance parameters into the log-posterior as the final objective function and apply numerical optimization methods, just as in the approach without an added prior.
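Under the Scaled-Inverse-chi-squared/Normal conjugacy, the MAP estimate of the residual variance has a simple closed form; the sketch below uses hypothetical hyperparameter values, not BulkLMM's defaults:

```python
def sigma2_map(rss_w, n, nu=1.0, tau2=0.1):
    """MAP estimate of the residual variance under a Scaled-Inverse-chi-squared
    prior with (hypothetical) hyperparameters nu and tau2, given the weighted
    residual sum of squares rss_w from n observations. By conjugacy the
    posterior is again Scaled-Inverse-chi-squared, and its mode has the
    closed form returned below."""
    return (nu * tau2 + rss_w) / (n + nu + 2.0)
```

Even when the weighted residual sum of squares is zero, a degenerate fit that would drive the maximum-likelihood variance estimate to zero and push the heritability toward the boundary, the MAP estimate stays strictly positive.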
Sub-regional numerical optimization using Brent's method.
When we estimate the variance components of the LMM by optimizing the heritability, we apply Brent's method over [0,1). The Optim.jl package in Julia provides an implementation of this method. However, Brent's method is sensitive to the initial guess as well as to the shape of the objective function, and can produce incorrect results if the objective function has more than one local minimum over the optimization interval.
To mitigate these issues, we provide an option to sub-divide the whole optimization region and apply Brent's method to each sub-interval. The final result is determined by comparing the objective function values at the sub-interval optima. The number of sub-divisions is given by the user; more sub-intervals give greater accuracy at the price of lower speed. In some cases, narrowing down the search space may also lead to a better convergence rate for Brent's method. As the optimization in each sub-interval is independent, even faster computation can be achieved if these operations are parallelized.
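The sub-interval strategy can be illustrated with SciPy's bounded Brent routine on a deliberately multimodal toy objective (the function and sub-division count are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimize_subintervals(f, n_sub, lo=0.0, hi=1.0 - 1e-6):
    """Apply Brent's bounded method on each of n_sub sub-intervals of
    [lo, hi] and keep the best minimizer found."""
    edges = np.linspace(lo, hi, n_sub + 1)
    best_x, best_val = lo, np.inf
    for a, b in zip(edges[:-1], edges[1:]):
        res = minimize_scalar(f, bounds=(a, b), method="bounded")
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x, best_val

# A multimodal toy objective with a local minimum near 0.39 and the
# global minimum near 0.92; subdividing guards against settling in the
# wrong basin.
f = lambda x: np.sin(12 * x) + (x - 0.8) ** 2
x_star, _ = optimize_subintervals(f, n_sub=8)
```

Because each sub-interval search is independent, the loop body is trivially parallelizable.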
In order to provide future users of BulkLMM a comprehensive view of its performance under various scenarios, depending on the sizes of the input data as well as the options for model evaluation requested by the user, we used BulkLMM to perform two analyses: one on the BXD mice liver proteome traits and the other on the heterogeneous stock (HS) rats prelimbic cortex transcriptome. We describe the runtime performance of our algorithms and follow it with an assessment of the accuracy of the algorithms. The goal is to assess the trade-off between speed and accuracy, so that the user can make informed choices.
Datasets.
The BXD mice liver proteome data consists of individual-level measurements on a total of 32,445 liver proteins. The 248 individual samples came from 50 BXD strains genotyped at 7,321 markers. This dataset has a modest sample size with population structure, so the genetic analysis should be done using LMMs.
The second dataset is from 80 heterogeneous stock (HS) rats whose prelimbic cortex transcriptome was measured using 18,416 features. These rats were genotyped at 117,618 markers making it a larger dataset in terms of number of markers.
Runtime performance.
To assess the runtime performance of BulkLMM, we executed each BulkLMM method, as well as the univariate LMM genome scan in GEMMA, on all traits (32K for the BXD data and 18K for the HS data) and recorded their runtimes. For BulkLMM, we used a 24-threaded Julia session on the most recent stable release of Julia, version 1.9.2, where the optimization of the heritability was based on REML and on a grid of 0.1 step-size. Since the method in GEMMA for univariate linear mixed modeling takes only one trait at a time, we iteratively ran GEMMA on 1,000 randomly selected traits and approximated the runtime for processing all traits by multiplying the execution time for 1,000 traits by the total number of traits divided by 1,000. Such extrapolation is reasonable since running GEMMA iteratively for GWASs on multiple traits has runtimes that are approximately linear in the number of traits. A summary of the approximated runtime of each method is shown in Table 1 .
Numerical accuracy compared to GEMMA.
We compared the numerical accuracy of our methods to that from GEMMA, which optimizes the heritability at each marker.
We verified that BulkLMM generates reliable results by comparing the results from both methods, on a common score scale, for 1,000 randomly selected traits in the BXD individual liver proteome data. As a summary, we report the sample mean of the element-wise absolute difference over the total numbers of traits (1,000) and markers (7,321) in Table 2 . We see that the "alt-grid" approach with a fine (0.01) grid is almost identical to GEMMA, and that the "null-grid" method on a coarse (0.1) grid is the fastest but has the greatest approximation error. This is reinforced in Figure 3 , which plots the GEMMA output against the output from the "null-grid" coarse-grid and "alt-grid" fine-grid approaches.
Runtime and accuracy as a function of -grid size.
One of our approaches to speeding up the LMMs in bulk is to perform a grid search across a discrete set of points in the interval [0,1). The performance of our two grid-search-based methods depends on the grid resolution. For our fastest algorithm, "null-grid", which benefits from this shortcut the most and therefore has runtimes most affected by it, we ran the method and recorded the execution times under grids of different step-sizes from 0.01 to 0.1 (corresponding to grid sizes from 100 to 10 points), performing a scan over all 32K traits of the BXD liver proteome data. We plotted the runtime curve, as well as the curve of the mean deviation of scores compared to scores from the null-exact method, as functions of the grid size, shown in Figure 4 .
We also compared the accuracy of the “alt-grid” method as a function of grid size, and find that while both are quite accurate, a fine (0.01) grid gives results almost identical to GEMMA ( Figure 5 ).
Adjustment of sample relatedness by using LMMs.
The most fundamental reason for favoring linear mixed models over linear models for GWAS is to control for the genetic relatedness among the sampled individuals. We compared the results for 1,000 randomly selected traits from running BulkLMM "null-grid" and "alt-grid", each with a grid of step-size 0.01, with the results computed from simple linear models. Figure 6 plots the comparison of these results.
Our BulkLMM package is able to perform genome scans for thousands of traits in moderately-sized populations in a few seconds (5 seconds for the BXD data, 14 seconds for the HS data). These represent speedups of 94 times and 16,000 times respectively compared to running GEMMA one trait at a time. Running GEMMA one trait at a time scales linearly with the number of traits, and may be infeasible for some datasets. We believe our approach makes these datasets tractable and opens up the possibility of interactive analyses in real time for genome scans for high throughput traits.
Trade-off between runtime and precision.
We were able to achieve our fastest speeds with the “null-grid” method, which uses two approximations: (a) it estimates the heritability under the null only and (b) it considers a grid of heritabilities. The method then groups the traits by the best heritability on the grid, and uses matrix multiplication to efficiently calculate the scores. This method was the least accurate of the three algorithms, "null-exact", "null-grid" and "alt-grid", for fitting the LMMs. Our results show the precision of the results given by these algorithms, as well as their runtime performance, reflecting the slightly different approach each takes to estimating the heritability. Methods that allow greater slack in heritability estimation are accordingly faster. We let the user choose the approach that fits their needs. For a quick overview, the “null-grid” method is best, but if greater accuracy is needed, the “alt-grid” method on a finer grid is recommended. Since the speed scales linearly with the number of grid points, we suggest using a coarser grid first before moving to finer grids.
Performance characteristics.
As for the accuracy of BulkLMM, all of our methods generate reliable results, as reflected by the diminutive differences from the results of GEMMA. Based on eQTL analyses of 1,000 randomly selected BXD liver proteome traits using BulkLMM and GEMMA, we show that the mean absolute difference in scores is less than 0.02 for our least accurate but fastest method, and less than 0.001 for our most accurate method, which performs the exact LMM similarly to GEMMA but with a grid-search approximation.
Impact of data size.
Our key computational technique for speeding up genome scans for a large number of traits is to convert the iterative processes into a set of equivalent operations on large matrices, with the sizes of the matrices depending on the numbers of individuals, phenotypes, and genotyped markers. The HS data had a smaller number of individuals and traits, but a much larger number of markers than the BXD dataset. The runtimes for the HS data were much longer, but there is not a simple relationship between data size and runtime. In general, we expect the runtimes to also depend on the architecture of the machine (number of threads/cores, CPU speed, bus speed, and available RAM). For datasets with a very large number of markers, it would be preferable to split up the computation by chromosome or into smaller subsets to make the data fit in memory.
Additional features of the implementation.
Although we are focused on performing LMM genome scans in bulk, we provide some additional features that many users may find useful. First, we provide permutation testing for a single trait, which uses the key matrix multiplication approach to efficiently calculate the scores for permuted copies and can run in real time. Second, if the residual variances may not be equal for all individuals, we allow the user to specify a weighting scheme for a weighted LMM. This feature is useful if we have unequal numbers of replicates per recombinant inbred line and wish to use the strain mean as the trait value. Finally, we provide the option to specify a prior for the residual variance. While our motivation was boundary avoidance, this feature can also be used when summary data are used for the genome scans. We will expand on this topic in a future publication.
Potential limitations.
While we have succeeded in speeding up the process of computing LMM genome scans in bulk, we made a number of design choices that have consequences.
No missing data.
The most important practical limitation is that we assume that our phenotype and genotype inputs have no missing data. The user has to either impute or remove individuals/markers with missing data. Genotypes are routinely imputed and should not present a major obstacle. Imputation of the traits may be more challenging, but with high-quality data this should not be a major limitation.
One degree of freedom tests.
Our speedups rely on matrix multiplication, and that approach assumes that we are interested in one-degree-of-freedom tests for genome-wide scans. For most GWAS panels and recombinant inbred panels such as the BXD family, that is not a problem. Additional work is needed for situations where tests with two or more degrees of freedom are required.
Single kinship matrix for measuring relatedness.
Our LMM framework assumes that genetic relatedness can be adequately adjusted using a single kinship matrix. In some situations (such as when a dominance kinship matrix is desired), that may not be adequate, and additional work would be needed to handle multiple kinship matrices.
Grid search.
The precision of our grid-search methods depends on the shape of the objective function as a function of the heritability. If the curvature of the actual function is large near the maximum, then our grid approximation may not fare well. Empirically, this can be tested by using a finer grid and comparing the results. If there is a big change, the finer grid should be used.
Variance independent of mean assumption.
Our implementation is designed for traits whose variance is independent of the mean. For count data or binary data, alternative approaches need to be devised. Finally, our framework is designed for multiple independent quantitative traits. In some situations, a multivariate linear mixed model may be more suitable ( Kim et al., 2020 ).
Implementation in Julia language.
We implemented our software in the Julia programming language ( Bezanson et al., 2017 ), which has computational speed comparable to that of lower-level languages such as C and C++, but clean syntax similar to that of high-level languages such as Python and R. Our implementation uses Julia's support for multithreading. Further speedups may be possible with the use of GPUs and may be a topic of future investigation.
Funding
This work was supported by NIH grants R01GM070683 (KWB,SS), P30DA044223 (RWW,SS), R01GM123489 (RWW,KWB,SS,ZY,GF).
Data availability
For the two datasets used during our experimentation, the BXD individual liver proteome and the HS rats mRNA data, both are open to public access and are accessible from the GeneNetwork database at https://genenetwork.org/ . The BXD liver proteome data of BXD Longevity Study can be obtained by using the filename EPFL/ETHZ BXD Liver Proteome CD-HFD (Nov19) or the accession code: GN886. The Prelimbic Cortex mRNA data of HS-Palmer Rats are accessible using the following query in GN:
Species: Rat
Group: NIH Heterogeneous Stock (Palmer)
Type: Prelimbic Cortex mRNA
Dataset: HSNIH-Palmer Prelimbic Cortex RNA-Seq (Aug18)
Get Any: * | CC BY | no | 2024-01-16 23:49:20 | bioRxiv. 2023 Dec 21;:2023.12.20.572698 | oa_package/19/af/PMC10769382.tar.gz |
PMC10769392 | 38187742 | Introduction
Genomic sequence-to-activity models predict molecular phenotypes, such as DNA accessibility, transcription factor (TF) binding, histone modifications, and gene expression, directly from DNA sequence. Numerous deep learning models have been trained for these tasks using experimental assay data from a diversity of cell and tissue types collected by consortia such as ENCODE and Roadmap [ 1 - 6 ]. These models vary significantly in their architectures and the sequence context lengths they consider. Nearly all are trained on human reference genome sequences, which lack the variation present in the human population, but still aim to capture causal effects of regulatory variation. Recent methods show strong performance in predicting expression in massively parallel reporter assays [ 5 , 7 ] and enhancing functionally informed fine mapping of expression quantitative trait loci (eQTLs) [ 8 ]. However, some studies have identified shortcomings of these models, including difficulties in capturing distal regulatory information [ 9 ] and inability to consistently predict expression variation across individuals based on their genetic sequence differences [ 10 , 11 ].
Previous efforts to understand the limitations of these models have focused on evaluating them on biological benchmarks. In this paper, we take a complementary approach of estimating the prediction uncertainty of these models, which can help identify the source of model failures. High-confidence (low uncertainty) incorrect predictions suggest issues such as systematic biases or experimental noise in the training data [ 12 ]. Conversely, low-confidence (high uncertainty) predictions, even when correct, could indicate that the training data are insufficient to generalize to unseen test data. For instance, overparameterized models trained on the same data but with different parameter initializations can reach different local minima in the nonconvex loss surface [ 13 , 14 ]. These models may have similar losses on the training set and even test set, but have different predictions on out-of-distribution inputs, reflecting the underlying uncertainty [ 15 ]. Since a primary goal of genomic sequence-to-activity models is to predict the functional effects of regulatory variation, which can be viewed as extrapolating to novel sequences not seen at training, distinguishing between these two failure modes can help guide the design of future models. In addition, uncertainty estimation can be used to determine whether high-confidence predictions on some sequences can be trusted, even if the model on average performs poorly. Finally, uncertainty estimation does not require high-quality ground-truth biological data.
Here, we estimate the predictive uncertainty of Basenji2 by evaluating the consistency in predictions made by multiple replicates of the model, trained with different random seeds. Basenji2 makes accessibility, TF binding, histone modification, and gene expression predictions in 128 bp windows using a 131,072 bp sequence as input and ~55kb receptive field. We use the Basenji2 architecture as it is representative of state-of-the-art genomic sequence-to-activity models and is amenable to training multiple model replicates. We characterize prediction uncertainty across model replicates on four types of sequences: reference sequences, reference sequences perturbed with known TF motifs, eQTLs, and personal genome sequences. We broadly observe that models tend to make high-confidence predictions on reference sequences, even when incorrect, and make low-confidence predictions on sequences with variants. | Methods
Ensemble model training.
We train five replicate models using the Basenji2 model architecture, training procedure, and human training data [ 4 ]. We use the Basenji Github repository for model training [ 34 ].
Reference sequence predictions.
To make predictions for held-out reference sequences, we average predictions over the forward and reverse complement sequences and over 1-nucleotide sequence shifts to the left and right. This same averaging procedure is used when making predictions in all subsequent analyses. For computational tractability, we downsample the held-out reference sequences ten-fold in all downstream analyses. We binarize experimental and predicted activity levels for held-out reference sequences by calling peaks separately per track, using the method in [ 34 ]. Briefly, for each experimental and predicted track, we called peaks using a Poisson model with rate corresponding to the mean activity across all 128bp bins and applied a 0.01 FDR cutoff.
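A hedged Python sketch of this per-track peak-calling recipe (Poisson null with the track mean as its rate, then a Benjamini-Hochberg cutoff at 0.01 FDR; details such as the BH step are our reading of the cited method, not the original code):

```python
import numpy as np
from scipy.stats import poisson

def call_peaks(track, fdr=0.01):
    """Call peaks on one track of per-bin activities: a Poisson null with
    rate equal to the track mean, followed by a Benjamini-Hochberg cutoff."""
    lam = track.mean()
    pvals = poisson.sf(track - 1, lam)          # P(X >= observed) under the null
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= fdr * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    peaks = np.zeros(m, dtype=bool)
    peaks[order[:k]] = True
    return peaks

track = np.concatenate([np.random.default_rng(5).poisson(2.0, size=990),
                        np.full(10, 60)])        # ten obvious peak bins
peaks = call_peaks(track)
```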
Gradient saliency maps.
For the 1308 protein-coding genes whose TSS is within a Basenji2 test sequence, we compute the gradient of the GM12878 CAGE prediction (averaged over the central ten bins) with respect to the input reference sequence nucleotides. We sum the absolute value of gradients in 128 bp windows to obtain non-negative contribution scores per bin.
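The window aggregation of gradients can be written in a few lines. This is an illustrative helper with names of our own; the gradient computation itself depends on the trained model and is omitted:

```python
def window_scores(grads, window=128):
    """Sum absolute per-nucleotide gradient values in fixed-size windows to
    obtain one non-negative contribution score per bin."""
    assert len(grads) % window == 0, "sequence length must be a multiple of the window"
    return [sum(abs(g) for g in grads[i:i + window])
            for i in range(0, len(grads), window)]
```

For example, a run of positive gradients followed by a run of negative ones yields two non-negative bin scores: `window_scores([1.0]*4 + [-0.5]*4, window=4)` gives `[4.0, 2.0]`.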
TF motif activity scores.
Inspired by the motif insertion approach in Yuan and Kelley [ 35 ], we select a set of 100 gene TSSs to use as endogenous background sequences. These 100 genes were held out from the Basenji2 training data and predicted consistently correctly by the replicates across the greatest number of tracks. For each TF in the CIS-BP motif database [ 36 ] and each background sequence, we sample a motif sequence from the TF’s PWM and insert it at a fixed position upstream of the gene TSS. For each model and prediction track, we calculate a TF activity score as the difference in predictions for motif-inserted sequences versus background sequences for the central two 128bp bins (because the TSS is at the junction of these bins). We calculate TF activity scores at four different motif insertion positions – 10bp, 100bp, 1000bp, and 10,000bp – upstream of each gene’s TSS, to assess how consistency in TF activity scores varies for proximal versus distal motifs.
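The motif sampling and insertion bookkeeping might look like the sketch below. The prediction step that turns sequences into TF activity scores requires the trained model and is omitted; function names and the toy coordinates are our own:

```python
import random

BASES = "ACGT"

def sample_motif(pwm, rng):
    """Draw one motif instance from a PWM, given as a list of per-position
    base-probability columns in A/C/G/T order."""
    return "".join(rng.choices(BASES, weights=col, k=1)[0] for col in pwm)

def insert_motif(background, motif, tss_index, offset):
    """Overwrite the background sequence so the motif ends `offset` bp
    upstream of the TSS; sequence length is unchanged."""
    start = tss_index - offset - len(motif)
    assert start >= 0, "motif would fall off the sequence"
    return background[:start] + motif + background[start + len(motif):]
```

The TF activity score is then the model's prediction on the motif-inserted sequence minus its prediction on the untouched background, evaluated at the two central bins.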
We also consider the effect of mutations on TF activity. For each TF, we simulate a mutation to its motif by selecting the lowest entropy position of its PWM and mutating it to the lowest probability base. We calculate TF mutation activity scores as the difference in predictions for sequences containing mutated motifs versus sequences containing canonical motifs (described above) for the central two 128bp bins.
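The mutation-selection rule (lowest-entropy PWM column, then that column's lowest-probability base) can be written directly. This is an illustrative helper assuming PWM columns in A/C/G/T order:

```python
import math

def disruptive_mutation(pwm):
    """Return (position, base) for the simulated disruptive mutation:
    the lowest-entropy column mutated to its lowest-probability base."""
    def entropy(col):
        return -sum(p * math.log2(p) for p in col if p > 0)
    pos = min(range(len(pwm)), key=lambda i: entropy(pwm[i]))
    base = min(range(4), key=lambda b: pwm[pos][b])
    return pos, "ACGT"[base]
```

For a two-column PWM where the second column is nearly fixed on A, the rule picks that column and a non-A base, e.g. `disruptive_mutation([[0.25]*4, [0.97, 0.01, 0.01, 0.01]])` returns `(1, "C")`.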
GTEx eQTLs.
We obtain GTEx v8 eQTLs fine-mapped using SuSiE [ 37 , 8 ] and matched negative variants from the Supplementary Data in Avsec et al. [ 5 ]. We filter out variants which have opposite directions of effect on different genes, as well as variants which are further from the TSS than the receptive field of the Basenji2 architecture allows. Then, for each variant, we retain only the gene-variant pair for the closest gene. To compute variant effect predictions for each track, we subtract the reference allele prediction from the alternate allele prediction across the central three 128bp prediction bins centered at the variant to obtain an absolute SAD (SNP activity difference) score.
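A minimal sketch of the SAD computation over the central bins (names are ours; per the text, the absolute value of this score is what is used downstream):

```python
def sad_score(ref_pred, alt_pred, center, n_bins=3):
    """SNP activity difference: alternate-allele minus reference-allele
    predictions, summed over the central `n_bins` 128bp bins around the
    bin index `center` containing the variant."""
    half = n_bins // 2
    window = range(center - half, center + half + 1)
    return sum(alt_pred[i] - ref_pred[i] for i in window)
```

For instance, with flat reference predictions `[1, 1, 1, 1, 1]` and alternate predictions `[1, 2, 3, 2, 1]` centered on bin 2, the signed SAD is 4; taking `abs(...)` gives the absolute SAD.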
Personal genomes.
We predict lymphoblastoid cell line (LCL) gene expression for 421 individuals in the Geuvadis consortium with both phased whole-genome sequencing and LCL RNA-seq data. We focus on the 3259 genes with a significant eQTL in the European cis-eQTL analysis. Following the approach detailed in Huang et al. [ 10 ], we construct personal haplotype sequences with single nucleotide variants inserted. For each haplotype, we average predictions for the GM12878 LCL CAGE track over the central ten bins surrounding the TSS. We then average haplotype predictions to obtain individual predictions. For each replicate, we identify drivers using the approach described in Sasse et al. [ 11 ] for the 100 genes with the highest uncertainty and 100 genes with the lowest uncertainty. We define a gene’s uncertainty by the variance in the cross-individual correlations of the five replicates and only consider genes where the mean of the cross-individual correlation magnitude is at least 0.1.

Results
Uncertainty quantification
In a supervised learning setup, with inputs x and outputs y related through the joint distribution p(x, y), we can train a model with parameters θ on a finite dataset D with N training examples. Adopting a Bayesian framework, we can decompose the predictive uncertainty into data (aleatoric) and model (epistemic) uncertainty [ 16 , 17 ]:

p(y | x, D) = ∫ p(y | x, θ) p(θ | D) dθ    (Eq. 1)
Data uncertainty refers to irreducible (does not decrease as N increases) uncertainty arising from the complexity of the data, for example due to class overlap or measurement noise. On the other hand, model uncertainty refers to reducible (decreases as N increases) uncertainty in estimating the true parameters θ given a finite dataset D. Previous methods proposed to approximate the integral in Eq. 1 include variational Bayesian neural networks [ 18 ] and Monte-Carlo dropout [ 19 ]. However, a surprisingly performant and effective approach to estimate predictive uncertainty, even under dataset shift, is to train a deep ensemble of M models that differ only in their random seeds [ 20 , 14 , 21 ]:

p(y | x, D) ≈ (1/M) Σ_{m=1}^{M} p(y | x, θ_m)    (Eq. 2)
For Basenji2 [ 4 ], the conditional p(y | x, θ) is the likelihood of y under a Poisson distribution whose mean and variance equal the model output f_θ(x). Therefore, Eq. 2 is a convex combination of independent Poisson random variables and defines a Poisson mixture model (a Poisson mixture model is always overdispersed and therefore not Poisson itself [ 22 ]). In practice, we typically aim to estimate uncertainty on binary classification tasks that involve complicated transformations of model predictions. For example, for eQTL sign prediction, we would like to estimate the probability that the eQTL increases expression: p(y_alt > y_ref | x_alt, x_ref, D). Unfortunately, there is no computationally tractable method to compute the probability mass function for the difference of two Poisson mixture models. Accordingly, we approximate this probability by determining the fraction of models in the ensemble whose point estimates suggest increased expression:

p(y_alt > y_ref | x_alt, x_ref, D) ≈ (1/M) Σ_{m=1}^{M} 1[f_{θ_m}(x_alt) > f_{θ_m}(x_ref)]    (Eq. 3)
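The overdispersion of the Poisson mixture follows from the law of total variance: Var(y) = E[λ] + Var(λ), which exceeds the mean whenever the component rates differ. A small helper makes this concrete (illustrative code of our own, assuming equal mixture weights as in a uniform ensemble average):

```python
def mixture_moments(lams):
    """Mean and variance of a uniform mixture of Poisson(lam_m) components.
    By the law of total variance, Var = E[lam] + Var(lam) >= mean, with
    equality only when all component rates agree (i.e. the mixture is Poisson)."""
    m = len(lams)
    mean = sum(lams) / m
    var_of_rates = sum((l - mean) ** 2 for l in lams) / m
    return mean, mean + var_of_rates
```

For two replicates predicting rates 2 and 4, the mixture has mean 3 but variance 4, so it cannot itself be Poisson; identical rates recover the Poisson case with variance equal to the mean.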
The approximation in Eq. 3 is coarse. However, since our goal is not to produce rigorous uncertainty estimates but instead to understand the types of sequences for which these models have high predictive uncertainty, it suffices for our analysis. When the probability in Eq. 3 is equal to 0 or 1 (i.e. all replicates predict the same direction of change), we say that the replicates are consistent . We further break down the consistent category into consistently correct and consistently incorrect when ground-truth data are available. When the probability is in (0, 1) (i.e. the replicates disagree in their predicted direction of change), we say that the replicates are inconsistent .
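These definitions can be sketched as a small classifier over per-replicate SAD signs (illustrative only; function and label names are ours):

```python
def classify_eqtl(sad_scores, true_sign=None):
    """Eq. 3 in practice: the ensemble probability of increased expression is
    the fraction of replicates with a positive SAD score. Labels follow from
    whether that fraction is 0, 1, or strictly in between."""
    p_up = sum(s > 0 for s in sad_scores) / len(sad_scores)
    if 0 < p_up < 1:
        return "inconsistent"
    if true_sign is None:
        return "consistent"
    predicted = 1 if p_up == 1 else -1
    return "consistently correct" if predicted == true_sign else "consistently incorrect"
```

For example, five positive SAD scores against a known positive effect are "consistently correct", five negative ones are "consistently incorrect", and any sign disagreement is "inconsistent".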
Ensemble training
We train 5 Basenji2 replicates using only the human training data from the Basenji2 dataset [ 4 ] and evaluate consistency in the predictions of the replicates. All replicates were trained with the same data and hyperparameters (learning rate, batch size, early stopping, etc.), differing only in their random seeds. Specifically, the replicates differed only in their random parameter initialization, the random sampling of training examples during mini-batch optimization, and the random dropping of neurons during training due to dropout. All five replicates have similar performance on the fixed test split of the Basenji2 dataset ( Fig. S1 ). We hypothesize that the gap between the replicates and the Basenji2 model in [ 4 ] is due to the lack of multi-species training (the original model also trains on mouse data). While recent works have ensembled replicate models to improve prediction performance [ 6 , 23 ], to our knowledge, we are the first to train replicate models to assess prediction uncertainty of genomic sequence-to-activity models.
Reference genome predictions are largely consistent across models, even when incorrect
We first assess the consistency between replicates on reference genome sequences held out during training. For each of the 5,313 Basenji2 tracks, we compute the Pearson correlation between predictions from replicates 1 and 2 on held out reference sequences ( Fig. 1a ). We observe high correlation between replicates (median Pearson’s r > 0.9) for all assays, suggesting that the replicates agree on relative activity differences across the reference genome. Predictions for CAGE tracks are slightly less correlated between replicates than predictions for other assays (DNase-seq/ATAC-seq, TF ChIP-seq, and histone modification ChIP-seq). Previous studies have noted that consistency in predictions does not entail consistency in input feature attributions [ 24 , 25 ], which reflect the relative importance of input features to a prediction. However, gradient saliency maps for CAGE predictions (in the GM12878 cell line) for genes unseen during training show that replicates tend to agree on the importance of regulatory regions ( Fig. 1b ).
Next, we ask whether consistently predicted reference sequences are more often correct with respect to the experimental measurements. We binarize the experimental and predicted activity levels for held out reference sequences by calling peaks separately per track ( Methods ). Using these binary peak calls, we classify reference sequences into one of three categories: consistently correct (all 5 replicates agree with the experimental peak label), inconsistent (3 or 4 of the replicates agree with each other), or consistently incorrect (all 5 replicates disagree with the experimental peak label). Most reference sequences are classified consistently correctly (median proportion > 0.8 for all assays), and only a small fraction of all sequences are predicted inconsistently (median proportion < 0.1 for all assays) ( Fig. 1c , left). However, within the much smaller subset of sequences corresponding to experimental peaks, approximately 20% are predicted inconsistently across replicates ( Fig. 1c , center). Strikingly, for peak sequences predicted consistently by all five replicates, a large fraction are consistently incorrect. We acknowledge that reliance on a peak calling threshold is a potential shortcoming, and that the choice of threshold may influence the proportion of sequences in each category.
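The three-way classification over binary peak calls can be sketched as follows (an illustrative helper with names of our own):

```python
def peak_category(replicate_peaks, experimental_peak):
    """Map five binary replicate peak calls and the experimental peak label
    to the three consistency categories used in Fig. 1c."""
    agree = sum(p == experimental_peak for p in replicate_peaks)
    if agree == len(replicate_peaks):
        return "consistently correct"
    if agree == 0:
        return "consistently incorrect"
    return "inconsistent"  # 3 or 4 of 5 replicates agree with each other
```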
Lastly, we subset further to sequences containing transcription start sites (TSSs) that are also experimentally determined peaks. On average, across CAGE tracks, ~60% of these sequences are predicted consistently correctly, ~20% are predicted consistently incorrectly, with the remaining ~20% predicted inconsistently ( Fig. 1c , right). In comparison to our analysis of all peaks ( Fig. 1c , center), we observe a similar proportion of inconsistently predicted sequences (~20%). However, for consistent predictions, TSS peaks are predicted correctly much more often than non-TSS peaks.
To explore potential systematic differences in the types of sequences predicted consistently or not, we analyze the predictions for two epigenetic tracks (DNase-seq and CTCF ChIP-seq) in GM12878 cells. We evaluate whether peak sequences in each of the three consistency categories differ on five different attributes: GC content, TSS distance, evolutionary conservation (phyloP), experimentally measured activity level (target), and mean predicted activity level across replicates ( Fig. S2 ). Peak sequences predicted consistently incorrect have significantly lower experimentally measured activity levels. Further, consistently correctly predicted peak sequences show more resemblance to promoters: they are more proximal to the TSS, have higher GC content, and are more evolutionarily conserved. This is consistent with previous reports that current sequence-to-activity models capture gene expression determinants in promoters, but struggle with distal sequences [ 9 ]. On all five attributes, inconsistently predicted peak sequences display characteristics in between those of the consistently correctly and consistently incorrectly predicted peak sequences.
Replicates disagree more on the effect of mutations in TF motifs than on the effect of canonical TF motifs
Deep learning models can learn gene regulatory syntax in part by learning TF motifs in their first layer convolutional filters [ 26 ]. Differences in the learned effects of TF motifs may contribute to inconsistent predictions across replicates. To test this hypothesis, we compute TF activity scores (the difference in predicted activity at the TSS for motif-inserted versus endogenous background sequences) for all human CIS-BP motifs using each replicate at four different fixed positions upstream of each gene’s TSS ( Fig. 2a & Methods ). We focus on the TF activity score sign, reasoning that replicates may have slight differences in TF activity magnitude but should agree on whether a TF increases or decreases activity if they have learned similar gene regulatory syntax. For each prediction track, we compute the fraction of TFs with inconsistently predicted directional effects across replicates ( Fig. 2b , light gray). We observe differences in the consistency of predicted TF activity signs for different assays. When motifs are inserted very proximally (10bp upstream of the gene’s TSS), we observe that TF activity scores are most consistent for CAGE tracks compared to other assays. We also find that–in general–replicate consistency decreases as the TF is inserted farther from the gene’s TSS, although consistency for DNase and ATAC tracks is relatively stable across different insertion positions. We note that, compared to TSS-proximal regulatory elements, distal regulatory elements tend to have smaller effects on TSS activity, and current models underpredict the effect of distal regulatory elements on gene expression, both of which may contribute to greater inconsistency for distal TF motif insertions [ 9 ].
Since genomic sequence-to-activity models can be used to understand the functional effects of regulatory variation, we consider the effect of mutations in TF motifs. We hypothesized that it may be easier for replicates to learn consistent effects for a canonical TF motif compared to a single base pair mutation in a motif. For each human CIS-BP TF motif, we select a mutation likely to disrupt activity (by mutating the lowest entropy PWM position to its lowest probability base) and compute TF mutation activity scores for insertions at the same four positions upstream of each gene’s TSS ( Fig. 2a & Methods ). For each track, we report the fraction of TF mutations with inconsistent predicted directional effects across replicates ( Fig. 2b , dark gray). For almost all assays and distances, there is greater inconsistency in predicting the effect of a mutation to a TF motif compared to predicting the effect of the canonical motif. The trend is more pronounced for perturbations farther from the TSS. The correlation between consistency of TF mutation activity scores (calculated using perturbations 10bp upstream of each gene’s TSS) and mutation probabilities (based on PWMs) shows that higher probability mutations, which are less likely to disrupt TF binding, have less consistent predictions ( Fig. S3 ). Therefore, by selecting a strongly disruptive (low probability) mutation to each motif, our analysis may represent a lower bound on inconsistency in mutation prediction, as less disruptive mutations are likely to have more inconsistent predictions.
eQTL sign predictions show high replicate inconsistency
To further quantify uncertainty in variant effect predictions, we utilize a dataset of fine-mapped expression quantitative trait loci (eQTLs) from the Genotype-Tissue Expression Consortium (GTEx) [ 27 ] and matched negative variants [ 5 ]. We select thirteen tissues with matched DNase-seq and CAGE tracks among the 5,313 Basenji2 tracks. Using each replicate, we compute variant effect predictions (SAD scores) for all fine-mapped eQTLs and matched negatives ( Methods ). Fine-mapped eQTL predictions show higher pairwise correlations between replicates than matched negative predictions ( Fig. S4 ), suggesting that replicates make more consistent predictions for putatively causal variants.
We focus on replicate agreement in predicting the directional effect of fine-mapped eQTLs (whether the eQTL alternate allele increases or decreases gene expression compared to the reference). Using predicted SAD signs from matching CAGE tracks, 55% of eQTLs (pooled across tissues) have inconsistent predictions (about 50-60% in individual tissues, Fig. S5 ), 29% have consistently correct predictions, and 16% have consistently incorrect predictions ( Fig. 3a ). Of the consistently predicted eQTLs, about 65% are correct. Predictions from DNase-seq tracks are inconsistent for about 40-50% of eQTLs ( Fig. S6a , d ) and, similar to CAGE, about 65% of consistent DNase predictions are correct.
We next investigate whether there are systematic differences between eQTLs with consistent and inconsistent sign predictions. Predicted effect sizes, computed as the mean SAD magnitude across replicates, tend to be larger for eQTLs with consistent predictions when using either the tissue-matched CAGE ( Fig. 3b ) or DNase ( Fig. S6b ) tracks, suggesting small predicted differences between alleles are less likely to be consistent. Additionally, we check if eQTLs with inconsistent sign predictions are overrepresented in certain regulatory regions, using their tissue-matched chromHMM annotation ( Fig. S7 ) [ 28 , 29 ]. However, we find that at least 40% of eQTLs have inconsistent sign predictions in all annotations, irrespective of the annotation’s distance to TSS or whether the annotation corresponds to an active or repressed state.
We then compare the performance of a single replicate to an ensemble majority vote on sign prediction. The ensemble majority vote marginally outperforms a single replicate in 10/13 tissues for tissue-matched CAGE tracks ( Fig. 3c ) and in 5/13 tissues for tissue-matched DNase tracks ( Fig. S6c ). For the six tissues with the most fine-mapped eQTLs, the ensemble often provides a slight improvement for TSS-proximal eQTLs ( Fig. S8 ), consistent with previous results [ 5 ].
Replicates vary substantially in predictions on personal genomes
Recent work [ 11 , 10 ] has found that current genomic sequence-to-activity models are unable to explain variation in gene expression across individuals based on their personal genome sequences. Huang et al. [ 10 ] evaluated four models–Xpresso, ExPecto, Basenji2, and Enformer–using paired whole-genome sequencing and RNA-seq data from lymphoblastoid cell lines (LCLs) of 421 individuals in the Geuvadis Consortium, and found that the models strongly disagree with one another on the predicted direction of genetic effects on expression. Motivated by this observation, we seek to determine whether disagreement between model classes is driven by predictive uncertainty.
We made predictions with the five replicates on 3259 genes with significant eQTLs in the Geuvadis dataset ( Methods ). There is significant disagreement in replicate predictions ( Fig. 4a ): in 2181 (67%) genes, replicates differ in the signs of their cross-individual correlations (Spearman correlation between predicted expression and measured RNA-seq across individuals). Comparing the cross-individual correlations of two replicates, we see that many genes have similar correlation magnitudes even if they have opposite signs, as observed by enrichment along the y = − x diagonal ( Fig. 4b ). This matches the finding in Huang et al. [ 10 ] that different model classes such as Basenji2 and Enformer often make predictions with more consistency in the magnitudes of their cross-individual correlations than in their signs. Our analysis suggests this phenomenon is likely not primarily a result of differences in model architecture or training procedure, but rather a result of predictive uncertainty. As before, we assess if prediction performance can be improved using an ensemble of the five replicates, but find that it does not substantially outperform a single replicate ( Fig. 4c ).
To better understand the source of predictive uncertainty, we compared 100 high uncertainty genes to 100 low uncertainty genes, chosen based on the variance in the replicate cross-individual correlations. Within this subset of genes, for each replicate, we identified drivers–single nucleotide variants that explain most of the variance in predictions–using the approach described in [ 11 ]. Similar to [ 11 ], we identify a small number (1-6) of drivers per gene. Drivers in high uncertainty genes tend to be unique to a single replicate, whereas drivers in low uncertainty genes are often found in all five ( Fig. 4d ). Further, we find that replicates disagree more often on the directional effect of drivers in high uncertainty genes than in low uncertainty genes, even when conditioning on the number of replicates that identify a variant as a driver ( Fig. 4e ). Cumulatively, these observations suggest significant uncertainty not only in determining the directional impact of a variant on gene expression but also in identifying the variants that affect expression.

Conclusion
We analyze uncertainty in the predictions of genomic sequence-to-activity models by measuring prediction consistency across five replicate Basenji2 models, when applied to reference genome sequences, reference genome sequences perturbed with TF motifs, eQTLs, and personal genome sequences. For held out reference sequences, which are similar in distribution to the training data, predictions show high replicate consistency. However, for sequences that require models to generalize to out-of-distribution regulatory variation – eQTLs and personal genome sequences – predictions show high replicate inconsistency. Surprisingly, consistent predictions for both reference and variant sequences are often incorrect.
These results have implications both for the application of current models and the development of future models. In other domains – including biological applications such as protein design – prediction uncertainty estimates have been employed to determine when to trust a model’s predictions [ 30 - 32 ]. As genomic sequence-to-activity models are increasingly applied to variant interpretation and sequence design problems, we believe accurate quantification of prediction uncertainty can be a useful practical tool. Our initial exploration of uncertainty quantification is based on ensemble prediction consistency. Further work is needed to develop rigorous, calibrated uncertainty estimates for genomic sequence-to-activity models; methods which calculate statistically rigorous uncertainty intervals (e.g. conformal prediction), rather than point estimates alone, are a promising direction.
Our characterization of uncertainty also sheds light on the failure modes of current models. The high degree of inconsistency observed for predictions on variant sequences suggests that models are struggling to generalize to these out-of-distribution inputs. Integrating additional sequence diversity into model training, as some works have suggested [ 33 , 11 ], may help overcome this limitation. For example, our TF motif insertion analysis revealed that replicates make less consistent predictions about the effects of mutations to TF motifs compared to the effects of canonical motifs, suggesting that in vitro TF binding assays (i.e. SELEX) may be a useful source of training data.

Authors contributed equally to paper. Co-authorship order was randomly chosen and happens to correspond to love for carrot
Abstract

Genomic sequence-to-activity models are increasingly utilized to understand gene regulatory syntax and probe the functional consequences of regulatory variation. Current models make accurate predictions of relative activity levels across the human reference genome, but their performance is more limited for predicting the effects of genetic variants, such as explaining gene expression variation across individuals. To better understand the causes of these shortcomings, we examine the uncertainty in predictions of genomic sequence-to-activity models using an ensemble of Basenji2 model replicates. We characterize prediction consistency on four types of sequences: reference genome sequences, reference genome sequences perturbed with TF motifs, eQTLs, and personal genome sequences. We observe that models tend to make high-confidence predictions on reference sequences, even when incorrect, and low-confidence predictions on sequences with variants. For eQTLs and personal genome sequences, we find that model replicates make inconsistent predictions in > 50% of cases. Our findings suggest strategies to improve performance of these models.

Supplementary Material

Acknowledgements
We thank Gabriel Loeb and Ioannidis lab members for helpful discussions. This work was partially supported by the U.S. National Institutes of Health grant R00HG009677, an Okawa Foundation Research Grant, and a grant from the UC Noyce Initiative for Computational Transformation. N.M.I. is a Chan Zuckerberg Biohub San Francisco Investigator.

License: CC BY. Citation: bioRxiv. 2023 Dec 23; 2023.12.21.572730
PMCID: PMC10771998; PMID: 38185712

Introduction
The benefits of physical activity for the prevention, management and survival of many adult cancers are well established, as highlighted by multiple systematic reviews [ 1 – 3 ]. Physical activity during and after treatment is safe and acceptable to patients and is endorsed by the World Health Organisation (WHO) and the American College of Sports Medicine (ACSM) [ 4 ]. The WHO states that adults living with cancer should avoid inactivity, aim to achieve at least 150 min of moderate-intensity aerobic physical activity per week and include muscle-strengthening activities on at least 2 days of the week [ 5 ].
Evidence has documented the benefits of participation in physical activity during and after medical treatment for improving psychosocial well-being [ 6 ], managing cognitive decline [ 7 ], enhancing chemotherapy completion rates [ 8 , 9 ], reducing the risk of comorbidities such as cardiovascular disease and reducing the risk of recurrence [ 1 , 10 ]. Despite the confirmed benefits of physical activity for the management of breast cancer, it is not routinely integrated as an essential component of cancer care, and internationally only 6% of oncologists refer their patients to physical activity programmes [ 11 ]. Furthermore, 31% of patients in the UK reported being inactive during and after treatment (i.e. completed less than 30 min of moderate-intensity activity a week) [ 12 ] and 43% of patients reported becoming less active following their diagnosis [ 13 ]. Previous qualitative research with cancer clinicians reported several barriers to the implementation of physical activity advice in routine care, which included a lack of training and knowledge on how to do so [ 14 ], time, concerns regarding patient safety [ 15 , 16 ] and lack of accessible patient referral pathways to exercise/physical activity specialists [ 17 , 18 ]. However, further research is required to explore facilitators to the integration of physical activity into the patient pathway. Supporting patients to change their health behaviours and reduce the burden of disease is becoming a greater feature within the role of healthcare professionals (HCPs) through initiatives such as Making Every Contact Count.
Therefore, this study aims to explore patients’ and HCPs’ views and experiences of participating in conversations offering support for physical activity in routine care during treatment for breast cancer. The study will contribute towards the development of future health service policy and implementation of support for cancer patients to engage in a physically active lifestyle to manage treatment-related side effects and reduce their risk of recurrence.

Methods
Design
A qualitative, semi-structured interview study was conducted with breast cancer patients and HCPs. The study is reported in line with the COREQ guidelines (see supplementary document ). Ethical approval was granted by the Loughborough University Research Ethics Committee (Ref: 2022–5729-10266). Data collection for this study took place between July and November 2022.
Participants and recruitment
Healthcare professionals
HCPs providing cancer care in the UK (i.e., including oncologists, surgeons and cancer nurses) were invited to take part in an online semi-structured interview. Snowballing sampling methods were used to recruit across the UK via special interest groups, mailing lists and social media.
Patients
Patients who were undergoing treatment or had completed treatment for breast cancer in the UK were invited to take part using snowball sampling methods. The study was advertised via social media, cancer support groups and newsletters. Initially, we aimed to recruit patients who had received their cancer diagnosis within three years of the interview. However, after the initial few interviews, it became apparent that those diagnosed within the three years experienced a change in routine cancer care due to disruptions caused by the COVID-19 pandemic. Therefore, a broad inclusion criterion was adopted where all breast cancer patients, regardless of time elapsed since diagnosis, were eligible to take part in this study.
Data collection
Informed consent was taken verbally or via email prior to the interview which took place via MS Teams or telephone, based on participants’ preference. The duration of the interviews ranged between 20 and 50 min. Only the participant and the interviewer (KG) were present for the interviews and there was no prior relationship.
Two semi-structured interview schedules (one for HCPs and another for patients) were developed by KG and AJD in line with the aims of the study. Interviews focused on understanding perceptions of physical activity, potential benefits, current practice for discussing physical activity in cancer care and facilitators to routine conversations within consultations. Demographic characteristics and current levels of physical activity using the exercise vital signs questionnaire [ 19 ] were gathered from patients to gain insight into their typical levels of physical activity.
All interviews were conducted by KG, audio-recorded and transcribed verbatim. Reflective notes were taken after each interview to summarise key themes and to inform iterations to the interview schedule where necessary. KG is a female senior research fellow with a PhD and expertise in mixed methods research with a range of clinical and non-clinical populations including those with breast cancer. KG is positive about promoting physical activity for the prevention and management of cancer and recognised that this could bias interpretation of the results; thus, CDM (who does not research cancer rehabilitation) provided peer-to-peer debriefing [ 20 ] when analysing the results.
Data analysis
All transcripts were anonymised and uploaded to NVivo 12. Inductive thematic analysis was used to identify and analyse patterns within the data. Coding and analysis were conducted by KG in parallel with data collection. Interviews were stopped when it was deemed that no new themes were emerging. Interviews with patients and HCPs were analysed separately but were presented together as themes overlapped. Emerging themes were discussed with AJD and CDM regularly throughout data analysis and were supported by the reflective notes. Emerging themes were reviewed by KG and CDM, and thematic mapping (see Fig. 1 ) was used to ensure an accurate representation of the data, to move beyond description and to support the development of the themes. Minor modifications were made to the titles of themes and subthemes during this iterative process, and no substantial disagreements in themes or subthemes were noted.

Results
Participant characteristics
Fifty-four patients and twenty-one HCPs expressed an interest in participating in this study and 26 consented: 11 HCPs (Table 1 ) and 15 patients (Table 2 ). All patients were female and had received a diagnosis of breast cancer in the UK within 6 months to 10 years of the date of the interviews. Patients had a mean age of 59 years (SD 9.6), most were employed (40%), Caucasian (90%) and received their treatment exclusively via the National Health Service (NHS) (67%). On average, patients self-reported that they completed 210 min (SD 132.9) of physical activity per week. Most HCPs were female (91%), with experience ranging between two and nineteen years and included one surgeon, five oncologists and five cancer nurse specialists across seven NHS healthcare trusts in the UK.
Themes
Themes between HCPs and patients were very similar; therefore, combined themes exploring current practice and the perceptions of integrating conversations about physical activity in routine cancer care are presented. Three themes emerged (with 14 subthemes): current practice, implementation in care and training (see Fig. 1).
Theme 1: Current practice
Limited physical activity advice
Most patients and HCPs were not aware of the cancer-specific physical activity guidelines and most patients did not recall receiving information about the benefits of being physically active during or after treatment for cancer (Table 3 ). Those who recalled conversations about physical activity were patients who received their treatment via private healthcare (discussed further below) and only two NHS patients (from the same Trust) recalled being asked about their physical activity levels by their medical team.
Reactive conversations
When physical activity was discussed by HCPs with patients, conversations were “usually a reactive discussion” (HCP11) in response to patients first raising the topic and “...pushing it more” (P01), or when “...it comes up in conversation...then then I would talk about it” (HCP9). HCPs noted that they tended to discuss physical activity with patients who had a personal interest in the activity and were seeking permission to remain active through their cancer treatment. HCPs avoided raising the topic of physical activity as they did not have the resources or infrastructure to provide their patients with sufficient support: “At the moment we have no solution, so it’s actually it makes us feel bad to raise a problem when you can’t offer the patient support and solutions, so I suspect that’s why it’s not being raised at all” (HCP7).
Perceived benefits - in the NHS
Most patients understood the general benefits of physical activity but expressed very little knowledge about the specific benefits physical activity can have on cancer treatment-related outcomes. Patients referred to the potential benefits of activity for their mental and physical health, fatigue or sleep: “We’d have a walk, and just doing that though was lovely because mentally it did me a lot of good” (P05) and “you kind of think I just need to rest, but actually it helped me manage my fatigue a lot more” (P01). Similarly, when HCPs spoke about the benefits of physical activity, most referred to psychosocial well-being as opposed to managing cancer treatment-related outcomes. Only one HCP mentioned that physical activity may be beneficial for reducing patients’ risk of recurrence.
Perceived benefits - in private care
A subsample of patients ( n = 5) who received private treatment were offered guidance from their healthcare providers and given practical support, enabling them to be active during treatment. When discussing the importance of physical activity, patients treated privately referred to the role of activity in improving their prognosis and survival following treatment, in addition to the psychosocial benefits. For example, “ We know as well from research that exercise after cancer can reduce your risk of it coming back by 50%” (P013), and another patient discussed the importance of resistance-based training: “It’s really good for your bone density...you know chemo knackers your bone density and it can bring on osteoporosis” (P14).
Theme 2: Implementation of care
The second theme includes seven subthemes that explore the perceived components necessary for the successful implementation of routine conversations about physical activity within breast cancer care. The subthemes identified through joint analysis of the data are (1) remote resources, (2) home-based activity, (3) credible source, (4) inclusivity, (5) changing perceptions, (6) frequency and timing of conversations and (7) social support.
Remote resources
Patients are often overwhelmed with information provided in medical consultations, and therefore, having materials that they could refer to in their own time with family or friends was perceived as important. Both patients and HCPs mentioned the potential benefits of promoting activity using a mobile phone application to encourage engagement with the information and self-monitoring of behaviours: “You could make an app, could make that interactive like patients could log their activities and then see in a month’s time or in a during a weekend which they did like” (P08); “We could say, ‘Here’s an app that would potentially give you some information about what you can and can’t do’...I think that would be really great actually, because I don’t think tagging on at the end of a booklet or giving them another full booklet about it would necessarily work” (HCP6).
Home-based
Participants from both groups expressed concern regarding structured exercise and commented on the importance of flexibility in the type and location of physical activity promoted. Breast cancer patients often spend a lot of time attending hospital visits, which can be both time-consuming and tiring. Therefore, additional scheduled or supervised classes were viewed as creating barriers to attendance. Home-based, flexible activities were preferred by most patients. Similarly, attendance at classes located in public spaces was less appealing due to the increased risk of infection during treatment. Lastly, home-based activity was perceived as less daunting as it would minimise the need for interaction with other people at a time when patients have reduced self-esteem and additional concerns regarding their body image and hair loss: “If it was in their own home. It was very safe for, you know, and they didn’t have to be in a place where people could see them...and they didn’t actually have to go anywhere to do it” (HCP5).
Credible source
Patients expressed preferences for having evidence-based information about physical activity and guidance from a trustworthy source at a time when they are vulnerable: “ If you can say to them this is evidence based, this is how much you know it decreases your fatigue ” (P13). Likewise, HCPs were conscious that patients trusted the information they provided and were more likely to perform the behaviour when introduced by a professional: “...if there was an endorsement from a healthcare professional it was much more likely that patients were interested in participating” (HCP6).
Inclusivity
When specifically asked, there was a clear indication that patients and HCPs believed that “the advice should be for everyone” (P09), and there was no need for additional structured physical activity assessments prior to its promotion as “...there’s probably a way in which they could all be physically active.....even if its just small changes” (HCP6), while also acknowledging that physical activity should be tailored to each individual’s abilities: “It’ll have to be sort of tailor made to their abilities” (HCP10).
Changing perceptions
Both patients and HCPs mentioned the importance of engaging with and changing the perceptions of loved ones, who often show their support by asking patients to rest. Patients expressed that they were “constantly told by my family, don’t overdo it” (P15) or to “conserve your energy and you shouldn’t exercise while you’re on treatment” (P03), and therefore requested materials for family members: “any kind of information in printed form that I could show them would have been helpful” (P15).
Frequency and timing of conversations
Patients and HCPs both commented on a preference for multiple conversations about the importance of physical activity throughout patients’ treatment pathway, as the information “might need to be given more than once” (P05). It was felt that patients should be made aware of the importance of physical activity before they begin treatment, while acknowledging that they may engage in physical activity at different time points during their treatment.
Social support
Patients commented that the ability to connect with others with shared experiences could facilitate physical activity: “ If you had people on the app and you know, they can become friends during treatment and then follow each other on it and then encourage each other” (P14).
Theme 3: Training for healthcare professionals
Iterations were made to the HCP interview schedule based on the recurring theme of additional training to support conversations about physical activity within routine cancer care. Analysis revealed four subthemes: (1) requests for training, (2) evidence-based information, (3) guidance and (4) method of delivery.
Requests for training
HCPs expressed an interest and need for further training to confidently raise awareness and deliver brief conversations about physical activity. For example, “ Maybe just a kind of a training package and you know, I think there’s some nurses out there probably still don’t really know how important it is, so really kind of selling it ” (HCP4).
Evidence-based materials
Credible information to inform brief conversations about physical activity was also requested; for example, “ I think some of the key evidence base that supports it, ‘cause you know, we’re very evidence driven in oncology ” (HCP8).
Guidance
The need for guidance regarding the type and duration of physical activity, as well as appropriate terminology and timing to support the integration of brief conversations about physical activity, was also noted by several HCPs. HCP1: “I think it’s how to incorporate it and when to incorporate it. So I think that’s where the training or teaching would be helpful to know when to integrate it to patients and how to integrate it.”
Method of delivery
There was a preference for a training module that could be delivered remotely as opposed to the need to attend a scheduled event in person. HCP11: “If you’ve got to make a physical trip somewhere, I think it just adds travel time to an event, then it can mean that makes the difference between something being feasible or not. So I think a remote.”
Discussion
Summary of findings
This study provides useful insight and practical guidance to support the integration of physical activity into the care pathway for all patients with breast cancer. Findings confirm that physical activity is not routinely discussed with all patients who receive treatment for breast cancer in the UK. HCPs are reluctant to discuss physical activity, yet commented that they would be comfortable promoting it following training and access to clear, evidence-based resources to support such conversations with their patients. Both patients and HCPs raised concerns about the logistics and safety of attending group-based, scheduled activity sessions in public gyms and favoured home-based, self-managed activity programmes.
Interpretation of findings
Most patients and HCPs in our study were unaware of the physical activity guidelines for those living with and beyond a cancer diagnosis, and the recommendations were not routinely discussed in consultations. Conversations with patients about physical activity were reactive, and therefore, breast cancer patients who are physically inactive and most likely to benefit from receiving support [ 22 ] in their consultations are currently not supported to become more active. Given the side effects of cancer treatment and that most women diagnosed with breast cancer are aged 50 years or older, it is particularly important that all women are advised to engage in strength-based physical activity twice per week in line with guidance [ 5 ] to reduce their risk of osteoporosis and maintain muscle mass [ 23 , 24 ]. Furthermore, patients who were actively educated about the benefits of physical activity were those who received private healthcare, raising questions about further health inequalities for this population. Providing guidance for all patients to raise awareness and improve knowledge regarding all the health benefits that physical activity provides is critical for reducing health inequalities and reducing the risk of recurrence.
Requests for physical activity guidance in routine care
HCPs were reluctant to raise the topic of physical activity without having adequate knowledge and resources to provide a solution to reducing inactivity among their patients. Participants in this study echoed barriers to promoting physical activity in cancer care similar to those noted in previous studies [ 14 , 17 , 18 ]. Yet both patients and HCPs commented on the necessity and importance of conversations about physical activity for improving health outcomes within the patient pathway. Our findings emphasise the willingness of patients and HCPs to integrate physical activity into routine cancer care. These findings support the ACSM’s call to action [ 5 ], highlighting the importance of implementing exercise as medicine within oncology. Findings echo the requirement for oncology clinicians, who are recognised as trustworthy and credible sources of information among patients, to advise patients to increase their physical activity.
Integration of physical activity in routine care
HCPs in our study believed that all patients treated for breast cancer could engage in physical activity and acknowledged that small bouts of activity may be more realistic for the most inactive, in line with current WHO recommendations [ 25 ]. HCPs commented that all patients could engage in some form of physical activity regardless of their abilities, suggesting that prior assessment of physical activity was not essential for its promotion within the cancer pathway for those treated for breast cancer. In line with previous evidence, brief conversations and multiple contacts from HCPs to support health behaviour change [ 26 ] were deemed essential to prevent overloading patients with information.
Remote, self-managed resources
Previous literature noted the lack of time in consultations as a key barrier to implementation [ 14 ], but this was not expressed as a concern by HCPs in our study. However, participants commented that physical activity promotion is facilitated by methods that are brief and self-sufficient, i.e. referral toward evidence-based resources such as a mobile application, which place minimal burden on clinicians. Emphasis was placed on providing patients with the ability to self-manage their activity as opposed to the need to attend structured rehabilitation. Self-managed, home-based activities were preferred due to their flexibility and potential to minimise body-image concerns associated with attending public spaces, as well as reducing the risk of infection.
Training
Consistent with other studies, most HCPs were unaware of the cancer-specific physical activity guidelines and therefore identified training as an essential element to the successful integration of physical activity. Requests were made for remote training modules, which outline the evidence and provide practical guidance and access to remote resources to facilitate effective signposting.
Implications of work
Our findings indicate that brief conversations promoting physical activity, delivered by a credible source and followed up at multiple stages of the treatment pathway, should be integrated into the treatment pathways for all patients treated for breast cancer. The provision of remote resources would give HCPs the confidence to promote physical activity whilst acting as a credible resource for patients living with and beyond cancer. Mobile applications are scalable, with the potential to reduce health inequalities while providing patients with the resources to self-manage home-based physical activity, which is perceived as a facilitator of behaviour change during cancer treatment.
Strengths and limitations
The in-depth interview approach, with a broad sample of patients and HCPs across multiple NHS Trusts and private healthcare settings, is a strength of the current study. Furthermore, the range of expertise in the study team is a strength, as it allowed critical exploration of current practice and identification of optimum methods for integrating conversations about physical activity into routine cancer care. Our patient sample varied across a range of demographic characteristics, yet the majority self-reported above-average levels of physical activity; therefore, our sample may not represent patients who are physically inactive. While overestimation of self-reported physical activity is typical, the above-average levels of physical activity among our sample may be due to selection bias and are a limitation of the study. Therefore, findings should be interpreted with caution, and future research should place further emphasis on engaging with inactive patients.
Conclusion
Many HCPs who offer cancer care are reluctant to raise the topic of participation in physical activity, yet patients would welcome discussions. Providing HCPs with education regarding the benefits of physical activity along with evidence-based, low-cost, remote interventions would allow them to integrate conversations about physical activity within routine breast cancer care for all patients, ultimately improving treatment outcomes and reducing their risk of recurrence.
Objective
The benefits of physical activity across the cancer continuum for many adult cancers are well established. However, physical activity is yet to be routinely implemented into health services throughout the world. This study aims to explore patients’ and healthcare professionals’ views about integrating conversations and support for physical activity into routine care during treatment for breast cancer.
Methods
Healthcare professionals and patients from across the UK living with or beyond breast cancer were invited to take part in semi-structured interviews that were conducted online. Recruitment for the study was advertised on social media, in cancer support groups and newsletters. Data were analysed using inductive thematic analysis.
Results
Three themes captured perceptions of integrating support for physical activity in routine breast cancer care among 12 health care professionals (who deliver breast cancer care) and 15 patients. Themes between healthcare professionals and patients overlapped, and therefore, combined themes are presented. These were: (1) current practice; (2) implementation in care and (3) training needs.
Conclusion
Many healthcare professionals who offer cancer care are reluctant to raise the topic of physical activity with patients, yet patients have suggested that they would like additional support to be physically active from their medical team. Providing healthcare professionals with education regarding the benefits of physical activity to reduce the risk of recurrence along with evidence based low-cost, remote interventions would allow them to integrate conversations about physical activity within routine cancer care for all patients.
Supplementary Information
The online version contains supplementary material available at 10.1007/s00520-023-08293-2.
Supplementary Information
Below is the link to the electronic supplementary material.
Author contributions
KG developed the original idea for the study with support from AJD. KG led the study and conducted data collection. KG conducted data analysis with substantial input from CDM and AJD. KG wrote the manuscript with input from CDM and AJD. All authors read and approved the final manuscript.
Funding
AJD is supported by a National Institute for Health Research (NIHR) Research Professorship award. This research was supported by the NIHR Leicester Biomedical Research Centre. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. The funding bodies had no influence in the design of the study nor in the collection, analysis, and interpretation of data, as well as the writing of related manuscripts.
Declarations
Ethics approval
Favourable ethical opinion was granted by Loughborough University Research Ethics (reference number: (Ref: 2022–5729-10266). All methods were performed in accordance with the relevant guidelines and regulations. All participants provided written informed consent for study participation.
Competing interests
The authors declare no competing interests.
CC BY | no | 2024-01-16 23:34:58 | Support Care Cancer. 2024 Jan 8; 32(1):87 | oa_package/e6/47/PMC10771998.tar.gz
PMC10772494 | 38193025
Commentary
Invasive lobular carcinoma of the breast (ILC) presents distinct difficulties in diagnosis and clinical management. Detection is a particular challenge because ILC typically does not form a palpable mass and is often difficult to image, including by mammography and positron emission tomography/computed tomography (PET/CT). Recent studies indicate that PET/CT imaging with 18 F-fluorodeoxyglucose (FDG), a mainstay in managing metastatic breast cancer, has limited utility in metastatic ILC [ 1 ]. A recent study by Olukoya and colleagues [ 2 ] offers insight toward leveraging this clinical liability into therapeutic opportunity.
FDG-PET/CT leverages the high avidity of most metastatic lesions for glucose and thus the FDG tracer. However, FDG uptake in metastatic ILC is limited, and many lesions are not detectable. In a head-to-head comparison of 18 F-FDG with 18 F-fluoroestradiol in 7 patients with estrogen receptor α (ER)-positive ILC, 268 bone lesions were detected by either method, with 94% being 18 F-fluoroestradiol-positive but only 34% being FDG positive [ 1 ]. This limited FDG avidity suggests that ILC rely less on glucose than other breast cancers; several studies to date support that ILC has a distinct metabolic phenotype [ 3-6 ], yet studies of how ILC may differentially utilize or rely on other fuels like fatty acids or amino acids are in their early stages. Understanding and ultimately leveraging the unique metabolism of ILC is important in better understanding ILC etiology and improving patient care.
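Back-computing approximate lesion counts from the percentages above makes the tracer contrast concrete (a sketch; counts are approximate because the published percentages are rounded):

```python
total_lesions = 268  # bone lesions detected by either tracer across the 7 patients

fes_positive = round(0.94 * total_lesions)  # 18F-fluoroestradiol-positive lesions
fdg_positive = round(0.34 * total_lesions)  # 18F-FDG-positive lesions

print(f"~{fes_positive} of {total_lesions} lesions FES-positive vs ~{fdg_positive} FDG-positive")
```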
Olukoya et al build on prior work from the Riggins Laboratory identifying that metabotropic glutamate receptors (GRM), ie, G-protein coupled receptors activated by extracellular glutamate, contribute to antiestrogen resistance in ILC cells [ 7 ]. Stires et al showed that targeting GRM with riluzole improved response to the antiestrogen fulvestrant in the endocrine-resistant ILC model LCCTam (derived from SUM44PE). Riluzole is a Food and Drug Administration-approved treatment for amyotrophic lateral sclerosis that inhibits glutamate release and is used to starve GRM of extracellular glutamate and/or directly inhibit GRM. Targeting glutamate signaling is intriguing in ILC in part due to clinical imaging data that contrast with the shortcomings of FDG-PET; 18 F-fluciclovine, an amino acid analog imported into cells via glutamine transport and thus glutamate metabolism, is readily taken up by ILC [ 8 ]. In the current study, Olukoya et al expand the observations of riluzole efficacy in LCCTam to multiple ILC and antiestrogen-resistant ILC models (in vitro and in vivo), compare efficacy to models from invasive ductal carcinoma (ie, breast cancer of no special type), and test riluzole in patient tumor explant cultures. Notably, since ∼95% of ILC are ER-positive, comparing riluzole to fulvestrant or the combination is a key feature of the study. These new data support that riluzole indeed has a distinct impact on ILC cell proliferation and survival compared to IDC, an effect that should be further developed to target and exploit metabolic signaling in ILC. Importantly, these data also highlight that the cross-talk between ER signaling and metabolic signaling may confound efforts to target ILC metabolism, despite antiestrogens being central to ILC treatment, and that antiestrogen resistance may create distinct metabolic states that will be differentially responsive to metabolic inhibitors.
Riluzole treatment caused an ILC-specific cell cycle arrest in G2/M, which was observed in 3 ER+ ILC cell lines and 2 associated antiestrogen-resistant variants, contrasting a G0/G1 arrest in MCF7 (no special type) and nontransformed MCF10A cells. Cell cycle arrest was associated with an increase in apoptotic cells in 1 of the parental/anti-estrogen resistant ILC pairs. Importantly, the observation that riluzole induced cell cycle arrest in both antiestrogen-resistant ILC models tested suggests that targeting GRM may be effective in both ER-dependent and -independent settings, as LCCTam and MM134:LTED were previously shown to be fulvestrant responsive and resistant, respectively. Riluzole also had single-agent efficacy against HCI-013EI (estrogen-independent ILC) patient-derived xenograft in vivo growth, with 3/5 tumors growth-suppressed and exhibiting an increase in caspase-3-mediated apoptosis. Despite this potential mechanistic separation from ER, riluzole and fulvestrant combined showed minimal additive effect on growth suppression in vitro, and the combination also had limited increase in efficacy over fulvestrant alone against HCI-013EI in vivo growth. Future studies will need to consider the disparate cell cycle effects of fulvestrant and other antiestrogens, which primarily cause G0/G1 arrest, vs the ILC-specific G2/M arrest caused by riluzole and whether the former antagonizes apoptotic effects of the latter. Similarly, in causing cell cycle arrest, antiestrogens suppress and remodel metabolism in ER+ breast cancer cells including ILC cells [ 6 ], which may limit metabolic demands and potentially the efficacy of blocking metabolic signaling. Careful sequencing may be particularly critical for combining riluzole or related drugs with antiestrogens. 
Though regulation of glutamate signaling by ER or antiestrogens in ILC remains to be directly studied and is an important future direction, fluciclovine uptake was reduced by neoadjuvant chemotherapy in ILC [ 8 ], supporting that at least chemotherapy can suppress glutamate signaling. However, riluzole combined with fulvestrant clearly merits further investigation in ILC—in the patient tumor explant studies, combination treatment increased apoptosis over single agent specifically in the ILC tumor among the 5 explants tested and not the other breast tumors.
Substantial clinical and laboratory data, in particular clinical imaging studies, support that ILC has a distinct metabolic phenotype wherein tumors are likely less reliant on glucose and more reliant on other metabolic pathways. This new study from Olukoya et al and the Riggins Laboratory builds key insight that targeting glutamate signaling with riluzole has distinct impacts on ILC cells relative to other breast cancer cells, which can form a foundation for precision treatments targeting ILC metabolism. Future studies will need to address the undoubtedly complex interplay between cell metabolism, ER signaling, and antiestrogen response and resistance. A better mechanistic understanding of GRM signaling and glutamate metabolism in ILC has the potential to lead to ILC-tailored combination treatments targeting ILC-specific metabolism and can identify other related metabolic vulnerabilities in ILC.
Funding
This work was supported by the Office of the Assistant Secretary of Defense for Health Affairs through the Breast Cancer Research Program under Awards W81XWH-22-1-0715 (M.J.S.) and W81XWH-22-1-0716 (J.H.O.). Opinions, interpretations, conclusions, and recommendations are those of the author and are not necessarily endorsed by the Department of Defense.
Disclosures
J.H.O. is an editorial board member for the Journal of the Endocrine Society.
CC BY | no | 2024-01-16 23:36:46 | J Endocr Soc. 2023 Dec 26; 8(2):bvad171 | oa_package/b3/d9/PMC10772494.tar.gz
PMC10775316 | 38196637
Methods (Online)
Subjects
We used male and female Fos-mRFP+/− transgenic rats on a Wistar background (NIDA transgenic breeding facility, n = 54), weighing 250–450 g, in all experiments. We group-housed rats (two to three per cage) in the animal facility and single-housed them before the experiments. For all experiments, we maintained the rats under a reverse 12:12 h light/dark cycle (lights off at 8 AM) with free access to standard laboratory chow and water in their home cages throughout the experiment. All procedures were approved by the NIDA IRP Animal Care and Use Committee and followed the guidelines outlined in the Guide for the Care and Use of Laboratory Animals 1 . We report the number of rats included in each experiment in the corresponding figure legend.
Behavioral procedures
We used novel context exposure for acute neuronal activation based on previous studies where this procedure has been shown to induce robust IL activation 2 . We also collected brains directly from the homecage to serve as baseline controls. The procedures for each experiment are outlined below.
Fos-mRFP labeling timecourse:
We exposed rats to a novel environment for 1 h and then placed them back in the homecage for varying periods of time before perfusion and brain extraction. We randomly assigned rats to 6 groups and collected PFA-perfused brain tissue at 1, 2, 3, 4, 8, or 24 hours after the start of novel context exposure. We also collected brains from a separate group of rats directly from the homecage to serve as baseline controls. During this experiment, we discovered a copy number discrepancy between breeding sublines, and 16 subjects were excluded from the experiment after copy number determination with ddPCR.
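Balanced random assignment to the six collection timepoints can be sketched as follows (a minimal illustration; the subject count and seed are hypothetical, and the text does not describe the actual randomization procedure):

```python
import random

timepoints_h = [1, 2, 3, 4, 8, 24]                 # hours after start of exposure
subjects = [f"rat_{i:02d}" for i in range(1, 43)]  # hypothetical 42 subjects

random.seed(0)        # fixed seed so this illustration is reproducible
random.shuffle(subjects)

per_group = len(subjects) // len(timepoints_h)     # 7 rats per timepoint here
groups = {t: subjects[i * per_group:(i + 1) * per_group]
          for i, t in enumerate(timepoints_h)}

for t in timepoints_h:
    print(f"{t} h group: {len(groups[t])} rats")
```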
XPoSE pipeline demonstration:
We exposed rats to a novel environment for 1 h, and then placed them back in the homecage for 2 h to allow for mRFP expression (peak expression at 3 h). We used isoflurane to induce a light plane of anesthesia prior to rapid decapitation and fresh brain tissue extraction. We also collected brains from a separate group of rats directly from the homecage to serve as baseline controls.
Sample collection for immunohistochemistry (Fos-mRFP timecourse, XPoSE-tag validation)
We anesthetized rats with isoflurane (at varying time periods after behavioral testing, or from homecage) and perfused them transcardially with ~250 ml of 1x PBS at pH 7.4 (PBS), followed by ~250 ml of 4% paraformaldehyde at pH 7.4 (PFA). We extracted brains and post-fixed for an additional 1–4 h in PFA before transferring them to 30% sucrose for 48 h at 4 °C. We froze sucrose equilibrated brains on dry ice and stored them at −80 °C prior to sectioning.
Immunohistochemistry (Fos-mRFP timecourse, XPoSE-tag validation)
We used a cryostat (Leica, Model CM3050 S) to collect coronal sections (40 μm) containing infralimbic cortex into PBS, and stored them at 4 °C until further processing. We rinsed free-floating sections first in PBS with 0.5% Tween20 and 10 μg/ml heparin (wash buffer, 3 × 10 min), incubated them in PBS with 0.5% TritonX-100, 20% DMSO, and 23 mg/ml glycine for 3 h at 37 °C (permeabilization buffer), and then in PBS with 0.5% TritonX-100, 10% DMSO, and 6% normal donkey serum (NDS) for 3 h at 37 °C (blocking buffer) prior to antibody labeling. We diluted primary (1°) antibodies (Fos: anti-Phospho-c-Fos (Ser32) (D82C12) XP® Rabbit mAb, #5348, Cell Signaling Technology, RRID: AB_10557109, 1:2500; mRFP: anti-DSRed mouse mAb, sc-390909, Santa Cruz Biotechnology, RRID: AB_2801575, 1:1000; Nucleoporin: anti-Nucleoporin 62 Mouse mAb, 610497, BD Biosciences, RRID: AB_397863, 1:500) in PBS with 0.5% Tween20, 5% DMSO, 3% NDS, and 10 μg/ml heparin (1° Ab buffer) and incubated sections in this solution overnight at 37 °C. Following 1° antibody labeling, we rinsed sections in wash buffer (3 × 10 min) and then incubated them in secondary (2°) antibodies (Fos: Alexa Fluor® 647 Donkey Anti-Rabbit IgG (H+L), Jackson ImmunoResearch Labs, 711-605-152, RRID: AB_2492288, 1:250; mRFP: Alexa Fluor® 488 Donkey Anti-Mouse IgG (H+L), Jackson ImmunoResearch Labs, 715-545-150, RRID: AB_2340846, 1:250; Nucleoporin: Alexa Fluor® 647 Donkey Anti-Mouse IgG (H+L), Jackson ImmunoResearch Labs, 715-605-150, RRID: AB_2340862, 1:250) diluted in PBS with 0.5% Tween20, 3% NDS, and 10 μg/ml heparin overnight at 37 °C. Following 2° antibody labeling, we rinsed sections again in wash buffer (3 × 10 min), mounted them onto gelatin-coated slides, and allowed them to partially dry. We then coverslipped the slides using MOWIOL or Vectashield Vibrance mounting medium with DAPI nuclear stain and allowed them to hard-set overnight prior to imaging on a confocal microscope (Nikon C2 or Olympus FLUOVIEW FV3000).
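The dilution ratios above translate directly into stock volumes for a given batch of antibody buffer; a minimal helper illustrates the arithmetic (the 5 mL batch size is a hypothetical example, not taken from the protocol):

```python
def stock_volume_ul(total_ml: float, dilution: int) -> float:
    """Microliters of antibody stock needed for a 1:dilution in total_ml of buffer."""
    return total_ml * 1000.0 / dilution

# Dilutions used for the primary antibodies above, in a hypothetical 5 mL batch:
for name, dilution in [("Fos (1:2500)", 2500), ("mRFP (1:1000)", 1000), ("Nup62 (1:500)", 500)]:
    print(f"{name}: {stock_volume_ul(5.0, dilution):.1f} uL stock in 5 mL buffer")
```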
Confocal imaging (Nucleoporin validation)
We acquired confocal images of infralimbic cortex (AP +3.0 to AP +3.2, relative to Bregma) on an Olympus FLUOVIEW FV3000 confocal microscope using a 20x/0.75 NA air objective and 2x area zoom. The imaged field of view was 318.2 × 318.2 μm² (3.2181 pixels/μm) with an 8.0-μs dwell time and a 140 μm pinhole. For DAPI fluorescence, we excited tissue with 0.9% laser power at 405 nm and collected emitted fluorescence from 430 to 470 nm with a detection PMT voltage of 530. For Nup62 fluorescence, we excited tissue with 0.5% laser power at 640 nm and collected emitted fluorescence from 650 to 750 nm with a detection PMT voltage of 500. We imaged the two channels sequentially (order: 405, then 640) to minimize overlap and acquired 4 images (2 hemispheres × 2 sections) per animal.
Confocal imaging (Fos-mRFP timecourse)
We acquired confocal images of infralimbic cortex (AP +3.0 to AP +3.2, relative to Bregma) on a Nikon C2 confocal microscope using a 20x/0.75 NA air objective. The imaged field of view was 642.62 × 642.62 μm² (1.5935 pixels/μm) with a 5.3-μs dwell time and a 30 μm pinhole. For mRFP fluorescence, we excited tissue with 2% laser power at 488 nm and collected emitted fluorescence from 500 to 550 nm with a detection gain of 80 and an offset of −1. For Fos fluorescence, we excited tissue with 2% laser power at 640 nm and collected emitted fluorescence from 670 to 1000 nm with a detection gain of 105 and an offset of 5. We imaged the two channels sequentially (order: 488, then 640) to minimize overlap and acquired 4 images (2 hemispheres × 2 sections) per animal for analysis.
Sample collection and storage for XPoSE pipeline
We snap-froze fresh brains in pre-chilled isopentane at −80 °C for 15 seconds, wrapped the frozen brain samples in labeled aluminum foil wrappers, and submerged them under dry ice for short-term storage. We used a −80 °C freezer for long-term frozen brain sample storage.
IL microdissection
We used a cryostat (Leica CM 3050S) to section the brains at −20 °C. First, we mounted frozen brains (−80 °C) onto cryostat pedestals and allowed them to equilibrate to −20 °C prior to sectioning. We cut 300 μm coronal sections containing IL (AP +2.8 to AP +3.8), microdissected IL tissue from both hemispheres, and stored tissue pieces in nuclease-free 1.5 mL polypropylene microcentrifuge tubes (Eppendorf) at −80 °C until further processing.
Nuclei isolation and XPoSE-tag labeling
We used established detergent-mechanical cell lysis protocols 3 for single-nucleus dissociation from frozen tissue punches with some modifications as described below. Briefly, we transferred frozen tissue punches to a prechilled Dounce homogenizer tube and added ice-cold detergent lysis buffer (0.32 M sucrose, 10 mM HEPES pH 8.0, 5 mM CaCl2, 3 mM MgAc, 0.1 mM EDTA, 1 mM DTT, 0.1% Triton X-100). We lysed tissue and released nuclei (10 strokes of pestle A, then 10 strokes of pestle B). We diluted the lysate in chilled low sucrose buffer (0.32 M sucrose, 10 mM HEPES [pH 8.0], 5 mM CaCl2, 3 mM MgAc, 0.1 mM EDTA, 1 mM DTT), then filtered the suspension through a 40 μm strainer to remove debris, centrifuged the sample at 3,200 × g for 10 min at 4 °C, and poured off the supernatant. Next, we resuspended the pellet from each sample in 750 μL of chilled resuspension buffer (1X PBS, 0.4 mg/mL BSA, 0.2 U/μL RNAse Inhibitor [Biosearch Technologies, Cat. 30281]) containing one of 8 unique XPoSE-tags (Nucleoporin 62 antibody conjugated to R718 fluorescent dye and one of eight distinct oligo-based Sample Tags (ST) to uniquely barcode nuclei from each sample; custom reagent, BD Biosciences, 1:2000). To verify the neuronal nuclei gate, we added both the XPoSE-tag and the neuronal nuclear marker NeuN (anti-NeuN Mouse mAb clone A60, MAB377X, Millipore Sigma, RRID: AB_2149209, 1:500). We incubated the samples with antibody for 15 minutes at 4 °C on a rotating mixer, transferred to a 5 mL polystyrene round-bottom tube (Falcon) containing 3.5 mL chilled resuspension buffer, centrifuged at 250 × g for 10 min at 4 °C, and aspirated 4 mL supernatant containing residual XPoSE-tag. We resuspended the pellet containing XPoSE-tag labelled nuclei in 1–2 mL chilled resuspension buffer and passed it through a 40 μm filter prior to fluorescence activated nuclei sorting (FANS).
Fluorescence activated nuclei sorting (FANS)
We used FANS to isolate XPoSE-tag labelled (Nup62+) neuronal nuclei, and to enrich for Nup62+/mRFP+ nuclei in our samples. We used a BD FACS Melody sorter equipped with 3 excitation lasers (488 nm, 561 nm and 640 nm) and 9 emission filters for FANS. We employed a sequential gating strategy similar to previous studies (Hope lab). Nuclei formed a distinct cluster that separated from debris and was gated in forward scatter area vs. side scatter area view. For XPoSE-tag (Nup62-BDR718) detection, we excited samples with 640 nm laser and collected emitted fluorescence from 690 to 750 nm with a detection PMT voltage of 605, gating the larger Nup62 positive population (neurons). For native mRFP detection, we excited samples with 561 nm laser and collected emitted fluorescence from 595 to 631 nm with a detection PMT voltage of 603. We used mRFP rats from the Homecage group to define a threshold gate for active (mRFP+) nuclei. We used these gates to sort active (mRFP+) and non-active (mRFP−) neuronal nuclei from each Novel context group rat into separate 1.5 mL Protein Lo-Bind Eppendorf tubes containing 750 μL of chilled resuspension buffer (sorting block chilled with recirculating chiller throughout sorting). To calculate mRFP percentages, we recorded 2,000 events from each sample before initializing the sort. We collected on average 3400 active (mRFP+) and 7500 inactive (mRFP−) neuronal nuclei per rat in the novel context group (n = 4) and counterbalanced the collected population between tubes so all experimental groups were represented in each capture. We collected 7500 neuronal nuclei per tube from each Homecage group rat (n = 4) to generate the IL cell-type atlas and serve as baseline controls. 
We processed all 8 rats on the same day and collected nuclei into the same two 1.5 mL Eppendorf tubes; thus, each tube contained up to 7500 neuronal nuclei (mRFP+, mRFP−, or all neuronal nuclei, counterbalanced) from each rat at the end of nuclei collection and was processed on a separate BD Rhapsody cartridge.
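The sequential gating strategy above can be pictured as a simple decision cascade. The following Python sketch is purely illustrative — the study's gates were drawn on the FACS Melody, not in code, and every threshold and event value below is a hypothetical, normalized stand-in chosen only to make the flow concrete:

```python
# Toy sketch of sequential FANS gating: scatter gate (nuclei vs. debris),
# then Nup62 gate (larger, neuronal population), then mRFP split.
# Every threshold below is a made-up illustration, not an instrument setting.

GATES = {
    "scatter_min": 0.2,  # assumed normalized FSC-A cutoff separating nuclei from debris
    "nup62_min": 0.5,    # assumed cutoff for the larger Nup62-positive (neuronal) cluster
    "mrfp_min": 0.6,     # assumed mRFP threshold, analogous to the Homecage-derived gate
}

def classify_event(fsc_a: float, nup62: float, mrfp: float) -> str:
    """Classify one flow-cytometry event through the sequential gates."""
    if fsc_a < GATES["scatter_min"]:
        return "debris"
    if nup62 < GATES["nup62_min"]:
        return "non-neuronal"
    return "active" if mrfp >= GATES["mrfp_min"] else "non-active"

print(classify_event(0.8, 0.9, 0.7))  # active
print(classify_event(0.8, 0.9, 0.1))  # non-active
print(classify_event(0.8, 0.2, 0.9))  # non-neuronal
```

In the real experiment the mRFP gate was set empirically from Homecage-group rats rather than fixed in advance.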
Nuclei capture and barcoding
We used the BD Rhapsody single-cell analysis platform for nuclei capture and followed the manufacturer’s suggested protocols for nuclei staining, cartridge preparation, sample loading, bead loading, lysis, and reverse transcription steps. We first added Vibrant Dye Cycle Green nuclear dye (Thermo Fisher, Catalog number V35004, 1:1000) to each Eppendorf tube (containing collected nuclei) and incubated the samples for 10 minutes on ice. We diluted samples to 1.5 ml with BD Sample Buffer with RNAse Inhibitor, then centrifuged the tubes at 250 × g for 10 min at 4 °C. We removed excess buffer to a final volume of ~650 μL. We used two cartridges for the experiment, one for each 1.5 mL Eppendorf tube used during sorting. We primed the BD cartridge with absolute ethanol and wash buffers, and then loaded resuspended nuclei. We incubated the loaded cartridge for 30 minutes at 4 °C and then used the BD Rhapsody Scanner to scan the cartridge and estimate nuclei capture efficiency. Next, we loaded barcoded capture beads onto the cartridge and incubated beads with the sample as directed by the manufacturer protocol. We then performed washes to remove excess beads and ran a second cartridge scan to determine bead load efficiency. After the scan, we engaged the bottom magnet in ‘Lysis’ mode to immobilize capture beads, added lysis buffer, and incubated for 2 min. We then switched the magnet to ‘Retrieval’ mode, followed manufacturer protocols to collect beads with captured polyadenylated targets, and performed reverse transcription and an exonuclease reaction using the version 1 3’ kit (BD Biosciences, cat. 633733, 633773).
Single nucleus RNA sequencing, demultiplexing and genome alignment
We followed manufacturer protocols to generate single-cell whole transcriptome mRNA and Sample Tag libraries (BD Biosciences, cat. 633801) for sequencing on Illumina® sequencers. We followed manufacturer protocols to measure library concentration with a Qubit 4 fluorometer (ThermoFisher, cat. Q33230) and length distribution with a Bioanalyzer 2100 (Agilent, cat. 5067–4626). We processed both cartridges in parallel and generated separate libraries that were indexed and pooled for sequencing. We sequenced whole transcriptome libraries at 45,000 reads per nucleus and sample tag libraries at 1,000 reads per nucleus on an Illumina NovaSeq S4 lane (PE 150 bp). We trimmed sequences to 75 bp, then used the BD Rhapsody WTA Analysis Pipeline (v1.11 rev 8, Single-Cell Multiplex Kit - Mouse) on the Seven Bridges Genomics platform to demultiplex raw sequencing reads and align them to the reference rat genome (Rn7.2). We included introns and exons during alignment and generated filtered count matrices with recursive substitution error correction (RSEC) based on a liftover from the mm10 mouse genome annotation to the Rn7.2 rat genome 4 . Sample tag UMIs and sample tag calls were generated for each nucleus ID.
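The idea behind a per-nucleus sample tag call can be pictured with a toy demultiplexer. This Python sketch is illustrative only — the noise cutoff is an assumption, and the BD Rhapsody pipeline's actual calling algorithm is more sophisticated than a simple majority vote:

```python
# Toy sample-tag demultiplexing: assign each nucleus to the tag holding a
# clear majority of its Sample Tag (ST) UMIs, otherwise flag it.
# NOISE_FRACTION is a hypothetical cutoff, not the BD pipeline's parameter.

NOISE_FRACTION = 0.2

def call_sample_tag(tag_umis: dict) -> str:
    """Return a sample-tag call for one nucleus given its per-tag UMI counts."""
    total = sum(tag_umis.values())
    if total == 0:
        return "Undetermined"
    best_tag, best_count = max(tag_umis.items(), key=lambda kv: kv[1])
    competing = total - best_count
    return best_tag if competing <= NOISE_FRACTION * total else "Multiplet"

print(call_sample_tag({"ST1": 95, "ST2": 3}))   # ST1
print(call_sample_tag({"ST1": 50, "ST2": 45}))  # Multiplet
```

Nuclei flagged as multiplets by this kind of logic are what the downstream QC step discards as having multiple XPoSE sample tags.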
snRNAseq analysis
All analysis scripts to reproduce the figures are on our XPoSE repository on Github ( https://github.com/ksavell/XPoSE ). Rhapsody count matrices with RSEC for each cartridge were analyzed with Seurat (v4.3) in R (v4.3.0). Seurat objects were created from count matrices, and the sample tag ID and sample tag counts were assigned as metadata from the corresponding sample tag outputs. Experimental metadata for each nucleus was assigned based on the cartridge/XPoSE-tag combination, and cells with multiple XPoSE sample tags or fewer than 50 gene features were removed.
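The two QC rules above (exactly one XPoSE sample tag; at least 50 gene features) were applied to Seurat objects in R; a minimal Python sketch with toy nuclei records shows the same logic:

```python
# Per-nucleus QC: keep only nuclei with a single XPoSE sample tag and
# >= 50 detected gene features. The records below are toy examples.

MIN_FEATURES = 50

def passes_qc(nucleus: dict) -> bool:
    """True if a nucleus has one unambiguous sample tag and enough genes."""
    return len(nucleus["sample_tags"]) == 1 and nucleus["n_features"] >= MIN_FEATURES

nuclei = [
    {"id": "n1", "sample_tags": ["ST1"], "n_features": 1200},        # kept
    {"id": "n2", "sample_tags": ["ST1", "ST5"], "n_features": 900},  # multiple tags: dropped
    {"id": "n3", "sample_tags": ["ST3"], "n_features": 12},          # too few features: dropped
]

kept = [n["id"] for n in nuclei if passes_qc(n)]
print(kept)  # ['n1']
```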
Of the 21,871 nuclei that passed QC filtering, we normalized and scaled the data using the 2000 most highly variable features before applying dimension reduction. We screened the cells for known markers for neurons and non-neuronal populations ( Snap25 [pan-neuronal] , Slc17a7 [excitatory neuron] , Gad1 [inhibitory neuron] , Mbp [oligodendrocyte] , Gja1 [astrocyte] , Col5a3 [microglia]) and removed 553 (1.7% of the total) nuclei as non-neuronal contamination. We then split excitatory and inhibitory neuron populations into two objects to process in parallel. After subsetting, we reclustered both excitatory and inhibitory data and generated the top 10 marker genes for each cluster using FindMarkers function in Seurat to visualize as a heatmap in Supplemental Figure 1 . We manually annotated the clusters based on marker expression using the Allen Cell Type Database nomenclature 5 .
For differential expression analysis, Libra (v1.0) 6 was used to sum gene counts for all nuclei for each rat per population to create a pseudobulked gene expression matrix for each cluster with > 75 nuclei per group in the comparison. Any genes with counts < 5 were removed for analysis. DESeq2 (v1.40) 7 was used to calculate DEGs for each cluster and comparison using false discovery rate corrected p-values. All DEGs for each comparison are listed in Supplemental Table 2 . Normalized counts were extracted from the DESeq2 object to calculate individual subject fold change for representative genes. ComplexUpSet was used to create a combination matrix (using both distinct and intersect modes) of DEG list comparisons. While we included both female and male subjects in accordance with sex as a biological variable policy, the current experiment is not appropriately powered to find differential gene expression by sex.
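The pseudobulking step (performed with Libra in R) reduces to summing gene counts across all nuclei of one rat within one cluster, yielding one column per biological replicate for DESeq2. A minimal Python sketch with toy counts:

```python
# Pseudobulk: sum per-nucleus gene counts by (rat, cluster). In the actual
# analysis, clusters without > 75 nuclei per group and genes with counts < 5
# were then excluded; the counts below are toy values for illustration only.
from collections import defaultdict

nuclei = [  # (rat, cluster, per-nucleus gene counts)
    ("rat1", "IT-L5/6", {"Fos": 3, "Vgf": 1}),
    ("rat1", "IT-L5/6", {"Fos": 2, "Vgf": 4}),
    ("rat2", "IT-L5/6", {"Fos": 0, "Vgf": 2}),
]

pseudobulk = defaultdict(lambda: defaultdict(int))
for rat, cluster, counts in nuclei:
    for gene, c in counts.items():
        pseudobulk[(rat, cluster)][gene] += c

print(dict(pseudobulk[("rat1", "IT-L5/6")]))  # {'Fos': 5, 'Vgf': 5}
```

Each (rat, cluster) column then enters DESeq2 as one biological replicate, which is what lets the design treat rats, not nuclei, as the unit of replication.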
Statistical Analysis.
Statistical and graphical analyses were performed using R and GraphPad Prism (version 9.5.1). We tested the data for sphericity and homogeneity of variance when appropriate. When the sphericity assumption was not met, we adjusted the degrees of freedom using the Greenhouse–Geisser correction. Sample retention proportions were tested with a 2-way repeated measures (RM) ANOVA with Bonferroni post-hoc tests. mRFP percentages/enrichment were tested using a RM 1-way ANOVA with Tukey post-hoc tests. Proportions for excitatory and inhibitory clusters were tested using multiple paired t-tests in a between- (Non-active vs. Homecage or Positive vs. Homecage) or within- (Active vs. Non-active) subject design and corrected for multiple comparisons (FDR < 0.05) using p.adjust() in R. The mRFP labeling timecourse was tested using a 1-way ANOVA with Bonferroni post-hoc tests. Because our analyses yielded multiple main effects and interactions, we report only those that are critical for data interpretation. See Table S1 for a detailed listing of all statistical outputs.

Author contributions
K.E.S., R.M., and B.T.H. designed the experiments. K.E.S. and R.M. conceptualized the XPoSE-seq approach. J.C.M. generated the XPoSE-tag reagent. K.E.S. and R.G.P. validated the XPoSE-tag reagent. R.M., R.G.P., O.R.D., and M.B.B. ran behavioral experiments and collected samples for XPoSE-seq. K.E.S., R.G.P., R.M., C.I.C., and J.W.A. ran XPoSE-seq experiments. K.E.S., R.M., S.J.W., E.V.L., J.H.C., and R.G.P. performed Fos-mRFP rat validation experiments. K.E.S., P.S., M.K.S., R.M., K.D.W., D.J.T., T.L.M., and R.G.P. established analysis pipelines and analyzed the data. P.S., D.J.T., and K.E.S. developed the R Shiny web application. K.E.S., R.M., and B.T.H. wrote the paper. All authors reviewed and approved the final version prior to submission.
Single nucleus RNA-sequencing is critical in deciphering tissue heterogeneity and identifying rare populations. However, current high throughput techniques are not optimized for rare target populations and require tradeoffs in design due to feasibility. We provide a novel snRNA pipeline, MultipleXed Population Selection and Enrichment snRNA-sequencing (XPoSE-seq), to enable targeted snRNA-seq experiments and in-depth transcriptomic characterization of rare target populations while retaining individual sample identity.

Single cell and nucleus RNA-sequencing (sc/sn-RNAseq) approaches disentangle the heterogeneity of complex tissues and aid in the identification of rare populations otherwise masked by conventional bulk-tissue procedures. While current pipelines effectively characterize common populations, there are multiple challenges in performing snRNA-seq on rare target populations 1 . High-throughput technologies isolate large nuclei numbers, but without enrichment, rare target populations are masked by majority non-target populations. When enrichment is used to increase target population frequency, multiple biological replicates are pooled in a single capture, sacrificing critical individual subject information. However, recent work highlights the importance of true biological replicates to properly account for between-replicate variation and reduce false discoveries in single cell/nucleus analyses 2 . To address these issues, we developed XPoSE-seq, which combines flow cytometry-based rare population enrichment with an antibody-based multiplexing snRNA-seq strategy to optimize cost, throughput, and sample identity retention ( Figure 1A ).
The key component in XPoSE-seq is a novel reagent (XPoSE-tag) that leverages an antibody against the ubiquitous nucleoporin complex protein (Nup62), found in nuclei from all cell types, with dual conjugations of 1) a far-red dye (R718) to identify Nup62-labeled nuclei in flow cytometry and 2) one of eight distinct oligo-based Sample Tags (ST) to uniquely barcode nuclei from each sample. XPoSE-tag can be used in conjunction with other fluorescent labels to select/enrich similar proportions of target populations from each sample before mixing (multiplexing) samples prior to nuclei capture on the BD Rhapsody microwell-based system 3 , 4 .
We verified that XPoSE-tag reliably labeled nuclei (identified using DAPI fluorescence) in the infralimbic (IL) subregion of rat medial prefrontal cortex ( Figure 1B ). Flow cytometry of IL nuclei preparations ( Figure 1C , left) revealed two distinct Nup62-positive populations within the nuclei gate ( Figure 1C , center). After co-staining for the neuronal nuclear protein NeuN, we found that larger-sized nuclei were NeuN-positive (i.e., neurons) while smaller-sized nuclei were NeuN-negative (non-neurons). This size difference between neurons and non-neurons agrees with previous studies 5 , and allows for XPoSE-tag based selection of either subpopulation via flow cytometry ( Figure 1C , right).
As a proof-of-principle study, we applied XPoSE-seq to enrich and profile the transcriptome of behaviorally active, Fos-positive neurons, a rare population that typically makes up <5% of neurons in a brain region 6 . We labeled behaviorally active neurons using Fos-mRFP transgenic rats 7 , in which expression of the fluorescent protein mRFP is temporally induced in neurons that express the endogenous activity marker Fos ( Figure 1D , labeling timecourse Figure S1 ). We prepared nuclei from samples taken 2 h after 1 h of novel context (NC) exploration (or from homecage controls). Using fluorescence-activated nuclei sorting (FANS), we sorted Active (mRFP-positive) and Non-active (mRFP-negative) neuronal nuclei from four rats in the novel context group ( Figure 1E – F ). We sorted neurons agnostic of mRFP signal from four homecage (HC) rats to serve as baseline controls. We counterbalanced populations from the 8 rats between two nucleus captures, multiplexing 8 populations per capture using the BD Rhapsody system. After library preparation, sequencing, and alignment/demultiplexing, we confirmed robust XPoSE-tag labeling, with ST reads corresponding to expected sample assignment ( Figure 1G ). There were no differences in population retention across the three collected populations ( Figure 1H ), indicating that the pipeline can be used to enrich for and retain both abundant (Control or Non-active) and rare (Active) populations from individual subjects. While the objective was to collect a 1:1 ratio of Active to Non-active neurons in NC subjects, we were ultimately limited by the size of the Active population. Nevertheless, the proportion of Active neurons from NC subjects was enriched approximately 9-fold ( Figure 1I ) compared to the original sample (see Table S1 for statistical outputs).
Dimensionality reduction of sorted nuclei expression counts confirmed majority neuronal clusters and negligible glia contamination (1.7%). Cluster transcriptomes matched previously identified excitatory (Slc17a7-positive) and inhibitory (Gad1-positive) cortical neuron classes ( Figure 2A & Figure S2 , public web browser at https://nidairpneas.shinyapps.io/xpose ) with excellent reproducibility in cell-type distribution between cartridge captures ( Figure S3 ). We examined the cell-type distribution of each population ( Figure 2B ) and found that NC Active nuclei were enriched in IT-L5/6 and Sst clusters and underrepresented in CT-L6, NP-L5/6, Sst Chodl, Vip , and Lamp5 neuronal nuclei clusters ( Figure 2C ). As expected, there were no differences in cell-type proportions between abundant HC and NC Non-active nuclei samples (Statistical outputs in Table S1 ).
A key benefit of XPoSE-seq is that individual subject metadata is retained for each population, allowing differential expression (DE) analyses that account for biological replicates and associated variation. We performed cluster-specific DE analysis on pseudo-bulked counts from each biological replicate ( n = 4 / group), excluding clusters with fewer than 75 nuclei / population. Transcriptional changes were found overwhelmingly in the Active population ( Figure 2D , Table S2 ), with differentially expressed genes comprising many known activity-dependent genes ( Figure 2E ) 6 , 8 – 10 . Next, we investigated shared or unique transcriptional changes across cell types ( Figure 2F ). Surprisingly, while several genes showed enrichment across multiple active clusters, only a single gene, Vgf , was upregulated in all active clusters. Additionally, while Sst was overrepresented in the Active population, there were no other transcriptional changes within this cell type. Finally, in line with and extending previous bulk-sample based studies 9 , 11 , activity-dependent transcriptional changes were almost non-existent in the Non-active population compared to Homecage nuclei, irrespective of cell-type cluster.
In the past decade, single-cell technologies have illuminated diverse cell types and cell states within tissues across species, discovering novel and rare populations via high-throughput transcriptomic analyses. XPoSE-seq complements these efforts, pinpointing rare-population-specific transcriptomic signatures and merging them with sample-specific metadata like sex, treatment, or behavior. XPoSE-tag, an antibody-based 12 multiplexing extension, captures rare populations from multiple samples in user-defined proportions. In this proof-of-concept study, XPoSE-seq effectively enriched and maintained sample-specificity for neurons activated in a novel context. Active neurons were distributed across IL cell types and had both shared and cell-type specific gene expression patterns. This underscores XPoSE-seq’s utility in exposing cell-type-specific alterations within rare populations.
Many studies opt to combine all biological replicates of an experimental group into a single capture step due to cost and feasibility. However, this approach risks overrepresentation bias in terms of collected cell clusters or differentially expressed genes (DEGs). Furthermore, it limits differential expression (DE) analysis, as many studies equate biological replicates with the number of cells/nuclei, inaccurately inflating analysis power. XPoSE-seq offers a solution by enabling individual subjects to serve as biological replicates, allowing precise assessment of each sample’s contribution to specific analyses. Moreover, it enables precise control over population proportions prior to nucleus capture, essential for mitigating unequal sample sizes and resulting issues like unequal variance and reduced statistical power. XPoSE-seq is well positioned to support targeted multi-sample transcriptomic profiling of rare populations labelled using transgenic reporter lines, cell-type and circuit specific viral tools, or emerging enhancer-driven virus-based approaches 13 , 14 .
In summary, XPoSE-seq combines the benefits of antibody-based sample-multiplexing and FACS-based population enrichment with high-throughput snRNAseq technologies to enable cost-effective transcriptomic characterization of rare target populations while retaining sample identity information. In addition, it provides fine control over proportions of nuclei collected per population and sample and allows for more robust statistical testing approaches. The XPoSE-seq approach and experimental strategy employed here is broadly applicable to investigation of other target populations, such as precious samples, rare cell-types, altered cell states by treatment or disease, or combinations thereof.
Supplementary Material

Acknowledgements
We thank members of the Hope lab for their support and insight during all stages of this study. We thank the Ueta lab (University of Occupational and Environmental Health, Kitakyushu, Japan) for providing the Fos-mRFP transgenic line. We thank Dr. Francois Vautier and the NIDA IRP Transgenic Breeding staff for management of the Fos-mRFP transgenic lines, and we thank Dr. Christopher Richie (NIDA IRP Genetic Engineering and Viral Vector Core) and Madeline Merriman for confirming the genotype of Fos-mRFP transgenic rat sublines. We thank the NIH Intramural Sequencing Core for assistance with sequencing. We thank Dr. BaDoi Phan for generating the mm10 annotation liftoff to Rn7.2 used during alignment.
Competing Interests
J.C.M., M.K.S., J.W.A., and C.I.C. were employees of BD Biosciences. Manuscript approval by BD Biosciences was not required, and BD Biosciences had no influence regarding data analysis, data interpretation, and discussion. All other authors declare that they do not have any conflicts of interest (financial or otherwise) related to the text of the paper. This work was supported by the Intramural Research Program of the National Institute on Drug Abuse. K.E.S. and R.M. received funding from the NIH Center for Compulsive Behaviors. K.D.W. and O.R.D. were supported by the NIDA IRP Scientific Director’s Fellowship for Diversity in Research. P.S., D.T., E.V.L., and J.H.C. were supported by the NIH Summer Research Program, and T.L.M. was supported by the NIDA Undergraduate Research Internship Program.
Data, Materials, and Code Availability
All relevant data that support the findings of this study are available by request from the corresponding author (B.T.H.). Sequencing data will be deposited in Gene Expression Omnibus at time of publication. Custom R analysis scripts to reproduce the figures are available on the XPoSE Github repository ( https://github.com/ksavell/XPoSE ). A public web browser to explore the dataset is located at https://nidairpneas.shinyapps.io/xpose .
Methods References

bioRxiv 2023.09.27.559834, posted September 29, 2023
PMC10775317 (PMID 38196583)

INTRODUCTION
Prion disease features striking biomarker signatures 1 – 4 , but limited data exist on pre-symptomatic changes 5 – 7 . Mirroring disease duration 8 , prodromal change in genetic prion disease appears brief, preceding symptoms by at most 1–4 years 6 , 7 . Prion “seeds” in CSF have been detected by real-time quaking induced conversion (RT-QuIC) in pre-symptomatic individuals 5 , 7 , but prognostic value remains unclear. Here, we report fluid biomarker trajectories associated with 4 disease onsets over 6 years in a longitudinal natural history of genetic prion disease mutation carriers.

METHODS
Study participants.
This previously described 5 cohort study includes asymptomatic individuals with pathogenic PRNP mutations; individuals at risk for same; and controls ( Table 1 ; Figure S1 ). Individuals with contraindication to lumbar puncture were excluded. Each visit included CSF and plasma collection, a medical history and physical, and a battery of cognitive, psychiatric, and motor tests and inventories. Individuals were invited to complete a baseline visit, a short-term repeat 2–4 months later (pre-2020), and approximately yearly visits thereafter. Data presented here were collected July 2017 to February 2023 and include data previously reported 5 , 9 . All participants were cognitively normal and provided written informed consent. This study was approved by the Mass General Brigham Institutional Review Board (2017P000214). Assay validation utilized samples from MIND Tissue Bank (2015P000221).
Biomarker assays.
The biomarker assays utilized were RT-QuIC (IQ-CSF protocol) 10 , PrP ELISA 9 , Simoa (Quanterix) GFAP, and Ella (Bio-Techne) NfL, T-tau ( Figure S3 ), and β-syn ( Figure S4 ); see Supplementary Methods .
Statistical analysis.
Biomarker relationships with age and mutation status were assessed by log-linear regression; curve fits shown in figures are the separate best fits for mutation carriers and for controls, while P values are for the effect of carrier status in a combined model: lm(log(value) ~ age + carrier). Our study does not disclose biomarker values or PRNP mutation status to participants, yet a combination of age and the number and spacing of visits completed could uniquely identify some individuals, presenting a self-identification risk. To mitigate this risk, for controls and non-converting carriers in data visualizations, ages were obfuscated by addition of a normally distributed random variable with mean of 0 and standard deviation of ±3 years, and visit spacing intervals were obfuscated by multiplication by a normally distributed random variable with mean 1 and standard deviation ±25%, capped at a maximum increase of +25% to avoid visually exaggerating the study’s duration. True ages and true visit intervals for all participants are used in all descriptive statistics and statistical models, and true values are shown in plots for the individuals who converted to active disease. For details of RT-QuIC analysis see Supplementary Methods . P values <0.05 were considered nominally significant. Analyses were conducted in R 4.2.0. Source code, summary statistics for all participants, and individual biomarker values for converting participants are available at https://github.com/ericminikel/mgh_prnp_freeze2 .

RESULTS
Of 41 carriers ( Table 1 ), four converted to active disease (N=3 E200K, N=1 P102L). Six RT-QuIC-positive samples ( Figure 1A ) belonged to 3 E200K individuals who converted and died of prion disease. Two PRNP codon 129 heterozygotes (M/V) were RT-QuIC positive at their first sample (2.5 and 3.1 years before onset); prion titer in CSF did not appreciably rise thereafter ( Figure 1B ). One homozygote (V/V) became RT-QuIC positive on study and became symptomatic 1 year later.
Plasma GFAP, a marker of reactive astrogliosis, was high relative to age in 2/4 converters, but change from individual baseline was unremarkable compared to controls and non-converters ( Figure 1D ). Plasma NfL appeared high and increased in all 4 converters, but not outside the range of non-converters and controls ( Figure 1E ). CSF NfL, CSF t-tau, and CSF beta-synuclein were each elevated in 2/4 converters and normal in 2/4 ( Figure 1F – H ); different converting individuals were high for different markers.

DISCUSSION
Here we describe fluid biomarker profiles in a longitudinal cohort of genetic prion disease mutation carriers, including 4 individuals who converted to active disease. As before 5 – 7 , at any given time, cross-sectionally, most carriers of genetic prion disease mutations do not have any detectable molecular sign of disease. Our data support the hypothesis that CSF prion seeding activity as assayed by RT-QuIC may represent the first detectable change in E200K carriers. However, we did not detect seeding activity in the CSF of a P102L converter, consistent with RT-QuIC’s lower sensitivity for most non-E200K genetic subtypes 1 , 11 . Though our sample is small, our data suggest that PRNP codon 129 genotype may modify the duration of CSF RT-QuIC positivity before onset in E200K individuals; longer prodromal positivity in M/V heterozygotes would mirror their longer disease duration after onset 12 .
Soluble PrP in CSF is reduced in symptomatic prion disease patients, presumably as a result of a disease sink process 13 – 16 , and yet pharmacologic lowering of CSF PrP may be important as a drug activity biomarker for trials of PrP-lowering drugs, and has been proposed as a surrogate endpoint in prevention trials 17 . Our data suggest that CSF PrP does not begin to decline prior to symptom onset, even in the presence of RT-QuIC positivity, suggesting its use in asymptomatic individuals will not be confounded.
Neuronal damage and neuroinflammation markers rise with age and may vary between individuals. Neither when normalized to age nor to individual baseline did any of these markers consistently provide distinctive signal in all 4 of our converting individuals relative to non-converters and controls. Thus, while these markers may be useful as an adjunct, none is likely to provide the prognostic specificity of RT-QuIC. RT-QuIC, meanwhile, may offer just 1 year of advance signal in some E200K cases, and currently faces limited sensitivity to other subtypes. Assay improvement, biomarker discovery, and continued sample accrual will be vital to identifying additional prognostic markers, particularly for non-E200K subtypes. At any given time, most carriers appear non-prodromal, thus, in this rare disease, prodromal individuals are unlikely to be identified in sufficient numbers to power clinical trials. Instead, primary prevention trials with inclusion based on genotype and CSF PrP as primary endpoint may be necessary 17 , and would honor the outsize benefit of early treatment observed in animal models 18 . Treatment of prodromal individuals could feature as a supportive arm and/or randomization off-ramp for carriers who develop a prodromal signature during a trial.
Limitations.
Four symptom onsets is a small absolute number from which to draw conclusions. Reflecting study enrollment and overall mutation prevalence, our observed onsets are skewed towards E200K. Some annual visits were missed due to COVID-19. We did not collect emerging sample types such as nasal brushings 19 , urine 20 , or tears 21 . Additional pre-symptomatic natural history work across multiple sites 7 , 22 , 23 will be required to build confidence in our observations.

CONCLUSIONS
In E200K carriers, RT-QuIC seeding activity in CSF can precede symptom onset by 1–3 years, perhaps depending on PRNP codon 129 genotype. CSF and plasma markers of neurodegeneration and neuroinflammation do not unambiguously identify imminent converters. CSF PrP levels are longitudinally stable over time in all participants even following RT-QuIC positivity.

AUTHOR CONTRIBUTIONS
Dr. Vallabh had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Concept and design: SMV, SEA, EVM. Patient visits and sample accrual: SEA, AJM, SWA, ACK, AMF, AB, KDM, GD. Sample analysis: MAM, SWA, PKW, BLH, AB, BAT. Supervision: SMV, AR, JG, AJM, EVM, SEA. Drafting of the manuscript: SV. Critical review of the manuscript: all authors. Statistical analysis: EVM. Obtained funding: SEA, SMV, EVM.
Importance.
Genetic prion disease is a universally fatal and rapidly progressive neurodegenerative disease for which genetically targeted therapies are currently under development. Preclinical proofs of concept indicate that treatment before symptoms will offer outsize benefit. Though early treatment paradigms will be informed by the longitudinal biomarker trajectory of mutation carriers, to date limited cases have been molecularly tracked from the presymptomatic phase through symptomatic onset.
Objective.
To longitudinally characterize disease-relevant cerebrospinal fluid (CSF) and plasma biomarkers in individuals at risk for genetic prion disease up to disease conversion, alongside non-converters and healthy controls.
Design, setting, and participants.
This single-center longitudinal cohort study has followed 41 PRNP mutation carriers and 21 controls for up to 6 years. Participants spanned a range of known pathogenic PRNP variants; all subjects were asymptomatic at first visit and returned roughly annually. Four at-risk individuals experienced prion disease onset during the study.
Main outcomes and measures.
RT-QuIC prion seeding activity, prion protein (PrP), neurofilament light chain (NfL), total tau (t-tau), and beta synuclein were measured in CSF. Glial fibrillary acidic protein (GFAP) and NfL were measured in plasma.
Results.
We observed RT-QuIC seeding activity in the CSF of three E200K carriers prior to symptom onset and death, while the CSF of one P102L carrier remained RT-QuIC negative through symptom conversion. The prodromal window of RT-QuIC positivity was one year long in an E200K individual homozygous (V/V) at PRNP codon 129 and was longer than two years in two codon 129 heterozygotes (M/V). Other neurodegenerative and neuroinflammatory markers gave less consistent signal prior to symptom onset, whether analyzed relative to age or individual baseline. CSF PrP was longitudinally stable (mean CV 10%) across all individuals over up to 6 years, including at RT-QuIC positive timepoints.
Conclusion and relevance.
In this study, we demonstrate that at least for the E200K mutation, CSF prion seeding activity may represent the earliest detectable prodromal sign, and that its prognostic value may be modified by codon 129 genotype. Neuronal damage and neuroinflammation markers show limited sensitivity in the prodromal phase. CSF PrP levels remain stable even in the presence of RT-QuIC seeding activity.

Supplementary Material

FUNDING
This study was supported by the Broad Institute (BroadIgnite Accelerator), Ionis Pharmaceuticals, Prion Alliance, CJD Foundation, and the National Institutes of Health (R21 TR003040, R01 NS125255).
Role of the Funder/Sponsor:
The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.

License: CC BY. medRxiv. 2023 Dec 18;:2023.12.18.23300042
PMC10775320 | PMID 38196653

Introduction
Understanding the relationship between human mobility and the spread of pathogens is crucial for mitigating outbreak events. This has been demonstrated for acute respiratory infections, such as influenza and daily commuter patterns [ 1 ] , for measles and large-scale seasonal movements [ 2 ] , and for HIV with long-term changes in connectivity [ 3 ] . Understanding this relationship can highlight spatial and temporal risks of transmission, which can inform public health preparedness, surveillance, and outbreak response efforts.
The zoonotic origins and high case fatality rate in humans (40–90%) of Ebola virus (EBOV, member of species Orthoebolavirus zairense , formerly known as Zaire ebolavirus) make it a particularly concerning disease-causing agent [ 4 , 5 ] . Possible reservoir hosts include fruit bats, while non-human primates and some antelope are known intermediate hosts [ 4 ] . Initial symptoms of Ebola virus disease (EVD) include headaches, high fever, and muscle aches. Disease progression can lead to more severe symptoms, such as internal bleeding and organ failure, which can result in death [ 4 , 6 ] .
The first documented case of EVD caused by the EBOV occurred in Yambuku, DRC, 1976. From 1976–2020, a total of 18 human EVD spillovers were documented, all within Central and West Africa [ 7 ] . A spillover is defined as a cross-species transmission event and often refers to pathogen transmission from wildlife to humans. Many EVD outbreaks are triggered by a single spillover event followed by human-to-human transmission between close contacts. Most spillover events have been traced to rural and heavily forested areas, where human-wildlife interactions could have occurred. Index cases are often linked to hunting activities, or individuals who work in forests or areas of land conversion (e.g., logging, mining, etc.) and may have had contact with bats or non-human primates [ 8 , 9 ] . Infected humans can transmit the virus directly to other humans with the greatest risk to close contacts, including caregivers, who may be family members or medical professionals [ 10 ] .
To identify possible links between human movement and the spread of EBOV, we measured movement and transportation networks across time and compared them to concurrent EVD outbreaks. Specifically, we analyzed characteristics of the road and river networks surrounding the spillover locations near the time of each spillover event for all 18 documented EVD outbreaks from 1976–2020. We quantified the connectivity of spillover locations to surrounding areas. Roads vary by condition and surface (e.g., unpaved roads) and enable transportation by vehicles, animals, and on foot [ 11 , 12 ] . Rivers are also commonly used for transit, particularly in Central Africa, and link large cities as well as smaller settlements [ 12 ] .
We first examined metrics for total outbreaks and next focused on the first 100 days of each outbreak to isolate the immediate importance of movement and connectivity. This temporal scope helped remove the downstream effects of transmission and long-term outbreak management. It also allowed us to isolate the early trajectories of two recent EVD outbreaks that were uncharacteristically large in number of cases, spatial extent, and total duration [ 10 ] .
We also examined proxies for the impact of outbreak response readiness and prior experience by comparing the total outbreak size and duration for the first EBOV outbreaks in an area to subsequent outbreaks in the same location. Using the locations of index cases, we defined ‘subsequent outbreaks’ as EBOV spillover events that occurred in close spatial proximity to a previous spillover event, either in the same town or a neighboring town with prior evidence of connectivity due to movement (< 60 km). We identified three sets (two pairs and one triad) of first and subsequent outbreaks and compared their outbreak metrics.
We found that measures of transportation networks were correlated with several metrics used to measure total outbreaks. Measures of transportation networks included total road length, total combined road and river length, and total number of intersections (road-road, road-river, and river-river intersections). However, this relationship was even stronger for the first 100 days of outbreaks; transportation networks surrounding each EBOV spillover event were significantly positively correlated with the number of reported cases in the first 100 days of an outbreak. This suggests that population mobility is a contributing factor in EBOV transmission. We also found that subsequent outbreaks were always smaller than initial outbreaks, suggesting preparedness or response experience may improve outbreak management. With increasing human mobility and population connectivity, emerging zoonoses present a growing threat to populations and global health systems.

Methods
Data sources
We scanned and digitized printed versions of Michelin Road maps of Central and South Africa that were published in 1963, 1969, 1974, 1981, 1989, 2003, 2007, and 2019 and West Africa from 1975, 1989, 1991, 2003, and 2019 ( Table 1 ) [ 13 , 14 ] . We used shapefiles of administrative areas from DIVA-GIS as reference maps for each country [ 15 ] . We included all countries with documented Ebola-Zaire virus spillovers or transmission events from 1976 to 2020: Central African Republic, The Democratic Republic of the Congo (DRC), Gabon, Guinea, Liberia, Republic of the Congo (RC), Sierra Leone, and Uganda ( Fig. 1a ). We used ArcMap version 10.8.1 to georeference and digitize roads and rivers from each scanned map [ 16 ] . Maps were georeferenced using the WGS 1984 geographic coordinate system and projected to the Africa Sinusoidal projected coordinate system. Serial road and river networks from different years were registered together to minimize the errors associated with digitizing. Topology corrections were applied on the networks in GRASS GIS version 7.8.7 [ 17 ] .
We analyzed documented spillover events from the DRC that occurred in 1976, 1977, 1995, 2007, 2008, 2014, 2017, 2018, 2019, and 2020, Gabon in 1994, 1996, 1996(b), and 2001, RC in 2003, 2003(b), and 2005, and Guinea in 2013. We used the map year closest to each spillover event year to perform a detailed analysis on the road and river networks. The greatest time difference between a map year and the corresponding spillover year for an event was 7 years, and the average was 3 years. Table 1 includes spillover event details and the map year that was matched to each spillover event. We excluded outbreaks that genetic sequence analysis determined were caused by transmission from a survivor of an outbreak in years prior; we included only outbreaks that were seeded by spillover events. We also excluded other orthoebolaviruses, which differ in incubation period, transmissibility, clinical presentation, and geographic range.
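The map-to-spillover pairing described above is a nearest-year lookup over the available map editions. A minimal Python sketch (the paper's analyses were done in GIS software and R; tie-breaking toward the earlier edition is our assumption, as the text does not specify it):

```python
# Michelin map edition years listed in the Methods.
CENTRAL_AFRICA_MAP_YEARS = [1963, 1969, 1974, 1981, 1989, 2003, 2007, 2019]
WEST_AFRICA_MAP_YEARS = [1975, 1989, 1991, 2003, 2019]

def closest_map_year(spillover_year, map_years):
    """Map edition minimizing |edition year - spillover year| (ties -> earlier edition)."""
    return min(map_years, key=lambda y: abs(y - spillover_year))

print(closest_map_year(1995, CENTRAL_AFRICA_MAP_YEARS))  # → 1989 (e.g., Kikwit 1995)
print(closest_map_year(2013, WEST_AFRICA_MAP_YEARS))     # → 2019 (Guinea 2013, a 6-year gap)
```

The 6-year Guinea gap is consistent with the stated maximum difference of 7 years between a map edition and its matched spillover.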
Data preparation
We measured outbreaks using the total number of cases recorded during the outbreak, the duration of each outbreak in days, and the number of cases in the first 100 days ( Fig. 1b ) for Yambuku [ 18 , 19 ] , Tandala, Mekouka [ 20 ] , Kikwit [ 20 , 21 ] , Mayibout [ 20 ] , Booue [ 20 , 22 ] , Mekambo/Mbomo [ 23 , 24 ] , Mbomo [ 25 , 26 ] , Etoumbi [ 27 ] , Luebo [ 28 , 29 ] , Meliandou [ 30 , 31 ] , Inkanamango [ 32 ] , Likati [ 33 , 34 ] , Bikoro [ 35 , 36 ] , North Kivu [ 37 , 38 ] , and Mbandaka [ 39 ] . We also examined the spatial spread of notified cases within the first 100 days of each outbreak from existing maps and surveillance reports in the references cited above.
We performed a sensitivity analysis of correlation between transportation networks and outbreak measures across study areas with each spillover event at the center. We examined square study areas of 50 km × 50 km, 100 km × 100 km, 150 km × 150 km, 200 km × 200 km, and 300 km × 300 km. We assessed correlations between measures of mobility networks and outbreaks at 10 km increments for sensitivity analysis ( Fig. S1 ).
We used estimates of walking distance and speed to determine the study area that would capture movement during the shortest incubation period for EVD. We applied the operational incubation period for EVD during outbreak response, which is 2 to 21 days, although 4–5 day minimum incubation periods are more typical than 2 days [ 40 , 41 ] . An average person is able to walk 5 km/hr along roads [ 42 ] . The estimated travel speed by boat is in the same range as walking speed [ 10 ] . Thus, at 7.5 hours of walking or traveling by boat per day, a person could potentially travel 75 km in 2 days. We concluded that a study area of 75 km around each spillover, or a 150 km × 150 km area centered on each spillover, would adequately represent the minimum critical movement early in an epidemic, and we focus our results accordingly (see SI for results from remaining study areas). We calculated the total length of the roads and rivers within each study area. Additionally, we quantified the number of total intersections for each study area, where an intersection is defined as a point where at least two routes intersect (e.g., road-road, road-river, river-river).
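The 75 km radius follows arithmetically from the stated assumptions (5 km/h travel speed, 7.5 h of travel per day, 2-day minimum incubation period); a sketch for transparency:

```python
WALK_SPEED_KM_PER_H = 5.0    # average walking speed along roads; boat speed is comparable
TRAVEL_H_PER_DAY = 7.5       # assumed hours of travel per day
MIN_INCUBATION_DAYS = 2      # lower bound of the 2-21 day operational incubation period

def max_travel_km(days, speed_km_per_h=WALK_SPEED_KM_PER_H, hours_per_day=TRAVEL_H_PER_DAY):
    """Farthest distance reachable by continuous daily travel over `days` days."""
    return speed_km_per_h * hours_per_day * days

radius_km = max_travel_km(MIN_INCUBATION_DAYS)  # 75 km reachable in the minimum incubation period
side_km = 2 * radius_km                         # side of the square study area centered on a spillover
print(radius_km, side_km)  # → 75.0 150.0
```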
Subsequent Outbreaks
We defined subsequent outbreaks as EBOV spillover events that occurred in the same location or within close spatial proximity, no greater than 60 km apart, from a previous spillover event. These locations also exhibited evidence of connectivity, including pathogen transmission between them and direct links along transportation networks [ 10 , 43 ] . We identified three sets of subsequent outbreaks. First, an outbreak in Mekouka, Gabon in December 1994 preceded an outbreak in Mayibout, Gabon in February of 1996. Second, an outbreak in Mbomo, RC in February 2003 preceded both outbreaks in Mbomo in November 2003 and Etoumbi, RC in May 2005. Third, an outbreak in Luebo, DRC in September 2007 preceded an outbreak in Luebo in December 2008 ( Fig. 1a ).
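The 'subsequent outbreak' rule reduces to a predicate over event pairs. A sketch using the study's 60 km threshold (onset months are from the text; the distance value below is a placeholder, since the true inter-site distances are not given here):

```python
from datetime import date

MAX_SEPARATION_KM = 60  # proximity threshold used to define subsequent outbreaks

def is_subsequent(first, later, distance_km):
    """True if `later` began after `first` and the two sites lie within the threshold."""
    return later["onset"] > first["onset"] and distance_km <= MAX_SEPARATION_KM

mekouka = {"site": "Mekouka, Gabon", "onset": date(1994, 12, 1)}
mayibout = {"site": "Mayibout, Gabon", "onset": date(1996, 2, 1)}

# Placeholder distance, consistent with this pair's classification in the study.
print(is_subsequent(mekouka, mayibout, distance_km=40))  # → True
```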
Statistical Analysis
We calculated Pearson correlation values to assess the relationships between outbreak measures and transportation network characteristics. We used linear regressions and multiple regression models to assess the relationship between each outbreak measure and the transportation network characteristics described. We also added the total road networks and the total river networks separately to a multiple regression analysis against the total cases in the first 100 days of each outbreak for the 150 km × 150 km study areas. We reported R-squared and p-values for each analysis. We used RStudio version 2022.07.1+554 for all statistical analyses [ 44 ] .

Results
We mapped each EBOV spillover event ( Fig. 1a ). Between 1976 and 2020, 10 EBOV spillover events occurred in the DRC, 3 in the RC, 4 in Gabon, and 1 in Guinea. The total number of cases for each outbreak ranged from 1 to 28,652 ( Table 1 ) [ 5 , 7 ] . The Guinea outbreak of 2013 reported 28,652 cases and a duration of 888 days; this outbreak represents the largest number of total cases and the longest duration of any EVD outbreak, while the Tandala outbreak in 1977 reported the fewest total cases (1 case) and the shortest duration (42 days). We also mapped the corresponding estimated geospatial spread for the first 100 days of each outbreak [ 43 ] . The total cases in the first 100 days of each outbreak ranged from 1 to 285; the Kikwit outbreak of 1995 reported the most cases in the first 100 days with 285 cases.
Network characteristics
We measured the lengths of the road and river networks around each reported spillover location from the Central/South Africa and North/West Africa Michelin maps from 1963–2019 and 1975–2019, respectively. Across the 150 km × 150 km study areas in Central Africa, from 1963 to 2019, we observed a maximum increase in road network length of 363 km and a maximum decrease in road network length of 120 km for 17 spillover events. The average road network length change was an increase of 53 km (see Table S1 for the net changes in road lengths in a 150 km × 150 km study area surrounding each spillover). The change in road network length from 1975 to 2019 in the 150 km × 150 km study area surrounding the West Africa spillover event was an increase of 133 km. River lengths were largely stable over time within each study area.
We calculated the ratio of river network length to road network length in a 150 km × 150 km study area surrounding each spillover ( Fig. 2 ). The study area surrounding the 1994 outbreak in Mekouka, Gabon, contained the highest ratio of river to road length, at 853 km of rivers to 61 km of roads, while the study area around the 1977 spillover event in Tandala contained the lowest ratio of rivers to roads, at 260 km of rivers to 710 km of roads. The study area around the 1995 Kikwit outbreak contained the greatest road length, at 1196 km. There was no temporal relationship between the year of spillovers and the ratio of river to road network length. At larger spatial scales, such as 300 km × 300 km, the ratio of river length to road length was approximately 1:1 for most outbreaks.
Total cases and outbreak duration
We examined the relationship between the transportation networks and outbreak metrics, specifically the total number of cases and total duration in days for each outbreak, using linear regression models. We conducted two separate analyses: one including all spillover events and one excluding the large Guinea and North Kivu spillover events.
All spillover events
In 150 km × 150 km study areas, the R-squared values indicated that total river network lengths, total road network lengths, total combined road and river network lengths, and total number of intersections each explained between 0 – 12% (p > 0.05) of the variation in the total cases and total duration in days of the outbreaks. Within the 100 km × 100 km study area, the R-squared values indicated that the total combined road and river network lengths explained 28% (p = 0.0236) of the variation in total cases and 35% (p = 0.0101) of the variation in total duration in days ( Fig. S2 ). Within the 300 km × 300 km study area, the R-squared values indicated that the total combined road and river network lengths explained 23% (p = 0.0434) of the variation in total cases. We found no significant relationships between transportation networks and outbreak metrics for the remaining study areas (see Table S2 for complete results).
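The R-squared values above come from one-predictor least-squares fits, for which R² is simply the squared Pearson correlation. The study's analyses were run in R; here is a dependency-free Python sketch of the underlying computation, on invented numbers (not the study's measurements):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def r_squared(x, y):
    """For simple linear regression, R^2 equals the squared Pearson r."""
    return pearson_r(x, y) ** 2

# Illustrative values only: network length (km) vs. cases in the first 100 days.
network_km = [300.0, 550.0, 800.0, 1100.0, 1400.0]
cases_100d = [5.0, 40.0, 60.0, 150.0, 285.0]
print(round(r_squared(network_km, cases_100d), 2))  # → 0.91
```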
Excluding Guinea and North Kivu spillover events
We found that the two largest and longest EBOV outbreaks, the Guinea (2013; 28,652 cases in 888 days) and North Kivu (2018; 3,481 cases in 694 days) events ( Table 1 ), occasionally obscured the relationship between transportation networks and both the total cases and total duration in days. When excluding these outbreaks in 150 km × 150 km study areas, the R-squared values indicated that total river network lengths, total road network lengths, total combined road and river network lengths, and total number of intersections each explained between 4 – 25% (p > 0.05) of the variation in the total cases and total duration in days of the outbreaks. Within the 200 km × 200 km study areas, the R-squared values indicated that the total combined road and river network lengths and the total road network lengths each explained 29% (p = 0.0320) of the variation in the total cases. Additionally, the total combined road and river network lengths explained 28% (p = 0.0354) of the variation in outbreak duration within this study area. When excluding the two largest outbreaks in our analyses of 300 km × 300 km study areas, the R-squared values indicated that the total number of intersections explained 27% (p = 0.0380) of the variation in the total cases and the total river length explained 57% (p = 0.0007) of the variation in the total duration. See Table S3 for complete R-squared values.
The first 100 days of each outbreak
We examined the relationship between the transportation networks and the total cases in the first 100 days of each outbreak ( Table 1 and Tables S2 – S5 ). The total combined road and river network lengths were strongly correlated with the total number of intersections for all of the study areas examined: 50 km × 50 km, 100 km × 100 km, 150 km × 150 km, 200 km × 200 km, and 300 km × 300 km (r = 0.81 – 0.94). We found that the total combined road and river network lengths explained between 35 – 56% of the variance of the total cases in the first 100 days across all outbreaks for each of the study areas examined: 50 km × 50 km, 100 km × 100 km, 150 km × 150 km, 200 km × 200 km, and 300 km × 300 km ( Table S2 ). We observed similar results with the number of intersections and the total cases in the first 100 days for each of the study areas. The total road network lengths explained between 27 – 36% of the variance of the total cases in the first 100 days of an outbreak for the following study areas: 150 km × 150 km, 200 km × 200 km, and 300 km × 300 km. The total river network lengths explained between 2 – 22% (p > 0.05) of the variance of the total cases in the first 100 days of an outbreak for each of the study areas examined. The strongest relationship was between the total combined road and river network lengths and the total cases in the first 100 days for each of the study areas examined.
In 150 km × 150 km study areas, the R-squared values indicated that the total combined road and river network lengths and total road length explained 54% (p = 0.0005) and 30% (p = 0.0183) of the variation in the total cases in the first 100 days of each outbreak, respectively ( Fig. 3 ). The total number of intersections against the total cases in the first 100 days is shown in Fig. S3 . The total river network length in this study area explained less than 3% of the variation in the number of cases during the first 100 days of each outbreak. See Table S2 for complete R-squared values.
Multiple regression analysis
We found that the total road network lengths and the total river network lengths added separately to a multiple regression model explained 54% (p = 0.0030) of the variation in the number of cases during the first 100 days of each outbreak. These results are similar to those that we observed when we analyzed the relationship between total combined road and river network lengths and total cases in the first 100 days using a linear regression model, as described above. The R-squared values also indicated that the total road network length alone explained 30% (p = 0.0183) of the variation in the total cases in the first 100 days of each outbreak.
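The two-predictor model (road length and river length entered separately) can be sketched in pure Python by solving the normal equations; the study itself used R, and the numbers below are illustrative only:

```python
def ols_r_squared(X, y):
    """R^2 of an OLS fit y ~ 1 + X, where X is a list of predictor columns.
    Solves the normal equations by Gaussian elimination (stdlib only)."""
    n = len(y)
    cols = [[1.0] * n] + [list(c) for c in X]  # prepend intercept column
    k = len(cols)
    # Build the normal-equation system A beta = b.
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    for p in range(k):  # forward elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        b[p], b[piv] = b[piv], b[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for c in range(p, k):
                A[r][c] -= f * A[p][c]
            b[r] -= f * b[p]
    beta = [0.0] * k
    for p in reversed(range(k)):  # back substitution
        beta[p] = (b[p] - sum(A[p][c] * beta[c] for c in range(p + 1, k))) / A[p][p]
    yhat = [sum(beta[i] * cols[i][t] for i in range(k)) for t in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yt - yh) ** 2 for yt, yh in zip(y, yhat))
    ss_tot = sum((yt - ybar) ** 2 for yt in y)
    return 1.0 - ss_res / ss_tot

# Illustrative only: separate road and river lengths (km) vs. early case counts.
roads = [200.0, 450.0, 700.0, 900.0, 1200.0]
rivers = [850.0, 600.0, 500.0, 400.0, 300.0]
cases = [10.0, 35.0, 80.0, 120.0, 260.0]
print(round(ols_r_squared([roads, rivers], cases), 2))
```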
Complete R-squared values for the transportation networks against outbreak measures including total cases, total duration in days, and total cases in the first 100 days at each study area is found in Table S2 . Additionally, Fig. S4a – d shows the total combined road and river network lengths against the total cases in the first 100 days for the following study areas: 50 km × 50 km, 100 km ×100 km, 200 km × 200 km, and 300 km × 300 km.
Subsequent outbreaks
Each of the 4 outbreaks that were classified as ‘subsequent outbreaks’ reported fewer total cases and a shorter duration in days than the preceding outbreak that occurred in the same or similar location. Each subsequent outbreak occurred within 2 years and < 60 km of the first outbreak. The shortest elapsed time between an initial and subsequent spillover event was 268 days while the longest elapsed time was 471 days. The subsequent outbreaks reported between 15 and 232 fewer cases than their preceding outbreaks. The subsequent outbreaks were between 15 and 108 days shorter in total duration than their associated initial outbreaks. Following the initial Mbomo outbreak in February 2003, the subsequent outbreaks in Mbomo in November 2003 and Etoumbi in May 2005, respectively, reported approximately 75% and 90% fewer cases compared to the total cases reported in the first outbreak. Likewise, the total duration of these subsequent outbreaks decreased by over 40% and 50%, respectively, compared to the duration of the initial spillover event ( Table 1 , Table S4 ).

Discussion
This study provides a novel data resource and novel analyses of transportation infrastructure surrounding all documented EBOV spillover events through 2020. All 18 of these events occurred within Central and West Africa. We examined the relationship between transportation infrastructure, which we used as a proxy for population mobility and connectivity, and outbreak measures. For outbreak measures, we considered the total number of cases in each outbreak, the total duration of each outbreak measured in days, and the number of cases reported in the first 100 days of each outbreak. We observed that the transportation networks had a stronger relationship with the number of cases in the first 100 days of an outbreak compared to the outbreak totals. Transportation network characteristics did not consistently show a significant relationship with the total cases or the total duration in days for each outbreak, likely because additional factors, including outbreak management, become determining factors in outbreak size after 100 days. These results suggest that transportation networks may play an important role in the transmission dynamics of EVD in the early stages of an outbreak.
Transportation networks connect people across locations. These networks both influence and are influenced by human mobility. Some EBOV-infected patients travel to healthcare centers and unknowingly transmit the virus along the way; others transmit the virus to healthcare personnel or visitors at the hospital [ 45 ] ; still others believe they are uninfected and travel to escape an outbreak. The geographic spread of EVD following a spillover event is determined by the hosts’ ability to travel. This is consistent with the outbreaks that reported the greatest numbers of cases (e.g., West Africa) and large geographic spread [ 46 ] .
The influence of road networks and river networks on movement depends on various factors. Previous studies found that small villages are especially subject to poor road upkeep, which limits the utilization of vehicles and hinders efficient travel [ 47 ] . Individuals in these areas instead travel along roads on foot or with animals such as donkeys [ 11 ] . River networks are also commonly used for transportation. Rivers connect major cities, for example Kinshasa and Kisangani in DRC, and Brazzaville and Kinshasa between DRC and RC, and also provide critical links between many smaller settlements [ 10 , 12 , 48 ] .
To examine the immediate importance of movement and connectivity as it relates to outbreak severity, we conducted a targeted analysis on the first 100 days of each outbreak. This approach provided insight into the short-term effects of transmission and early outbreak management rather than the downstream effects of prolonged exponential outbreak growth and long-term outbreak management. This approach highlighted the potential importance of early interventions in outbreak containment. In addition, the largest outbreaks (West Africa 2013 and North Kivu 2018) were no longer outliers when using this approach ( Fig. 1b ). This analysis allowed for a comprehensive examination of the early mechanisms that underlie each outbreak despite the sizable differences in the total number of cases and duration in days across outbreaks. Due to the very high proportion of susceptible individuals to EBOV infection in most populations, movement patterns early in an outbreak are extremely important in determining outbreak transmission dynamics. We find strong positive correlations between the transportation networks and EVD incidence in the first 100 days of each outbreak at various study areas. These findings provide insight into the relationship between human mobility and the spread of Ebola virus and further suggest that rapid response measures along transportation networks surrounding spillover events during the initial stages of an outbreak have the potential to reduce outbreak size.
Previous research identified an association between delayed recognition of EVD and longer, larger outbreaks. We observed similar results in the context of a small number of subsequent outbreaks, which occurred in the RC, Gabon, and the DRC ( Fig. 1a ). We found that even when spillover events occur in close spatial proximity and close together in time, and therefore rely on very similar or identical road and river networks, the subsequent spillover events resulted in less severe outbreaks in terms of total cases, total duration (in days), and spatial spread. This was most strongly demonstrated by the two subsequent spillover events in Mbomo in November 2003 and Etoumbi in May 2005 that followed the initial Mbomo outbreak in February 2003. With each of these subsequent spillover events, we observed fewer total cases and shorter duration, indicating a favorable trend towards efficient containment and control of the outbreaks. Excluding subsequent events from the regression analysis strengthened the positive association between transportation networks and both total cases and total outbreak duration for many scenarios ( Table S2 and S4 ). The responses to subsequent spillover events, especially those that occur shortly after the first spillover, may have benefited from experiential knowledge and outbreak response infrastructure developments, such as improved surveillance and diagnostics. This suggests that preparedness and rapid response, including early outbreak recognition, may be able to overcome the impact of movement on outbreak trajectory. We performed an additional analysis between transportation networks and total cases and total outbreak duration in which we excluded both subsequent events and outlier events. We found that the positive association between transportation networks and total cases and total duration was often further increased when excluding both subsequent and outlier events ( Table S2 and S5 ).
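The size reductions described for the Mbomo sequence are plain percent decreases. A sketch using case totals consistent with the approximately 75% and 90% reductions reported in the Results (treat the totals as illustrative; Table 1 carries the study's exact figures):

```python
def percent_reduction(initial, subsequent):
    """Percent decrease from an initial outbreak's total to a subsequent outbreak's total."""
    return 100.0 * (initial - subsequent) / initial

# Case totals consistent with the reported ~75% and ~90% reductions (illustrative).
mbomo_feb_2003 = 143
mbomo_nov_2003 = 35
etoumbi_2005 = 12
print(round(percent_reduction(mbomo_feb_2003, mbomo_nov_2003), 1))  # → 75.5
print(round(percent_reduction(mbomo_feb_2003, etoumbi_2005), 1))    # → 91.6
```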
To control future outbreaks, management plans can benefit from the inclusion of human movement and population connectivity, particularly in the early stages of outbreaks. Land use change (e.g., deforestation due to logging, mining, and development) has drastically increased pressures on ecosystems in parts of Central and West Africa. One study found that the index cases in humans of EVD outbreaks (2004–2014) occurred mainly in forest fragmented hotspots [ 49 ] . As population mobility and human settlements expand due to an increased demand for services and resources [ 50 ] , forest fragmentation and wildlife disruption are an imminent concern with potentially deadly consequences.
In addition to enabling the geographic spread of pathogen transmission following a spillover event, movement may also assist in the occurrence of spillover events. Areas along rivers can have an abundance of fruit trees, which are common feeding sites for migrating fruit bats. For example, during the 2007 EBOV spillover event in Luebo, DRC, fruit bats suspected of being EBOV reservoirs ( H. monstrosus and E. franqueti ) migrated along the Lulua River near Luebo, stopping at settlements along the river, where residents handled them [ 51 ] . The influence of transportation networks and population mobility on the emergence and transmission of EBOV and other transmissible pathogens is wide-reaching and would benefit from further exploration.
Control efforts, particularly in the early days of an outbreak, are critical in infectious disease outbreak management, including for EBOV. These efforts include strengthening infection prevention & control (IPC) programs (this may include testing and contact tracing along roads and rivers), strengthening disease surveillance systems, improving health infrastructure, and developing accessible public health communications [ 52 ] . Additional local factors that must be considered in control efforts include political and economic instability and conflicts, displaced populations, and large-scale population movements [ 50 ] . Experiential knowledge gained from managing spillover events of other zoonotic pathogens can also serve to help guide control efforts within the early stages of an outbreak in the face of future threats.

Contributions
NB and VM were responsible for the concept of the study. NB, AG, and BN completed the analyses. VM, MJM, and SNS provided relevant insight, data, and feedback. HDR provided support for map discovery, archiving, and digitization. NB and AG prepared the first draft of the manuscript. All authors reviewed and approved the manuscript before submission.
Human movement drives the transmission and spread of communicable pathogens. It is especially influential for emerging pathogens when population immunity is low and spillover events are rare. We digitized serial printed maps to measure transportation networks (roads and rivers) in Central and West Africa as proxies for population mobility to assess relationships between movement and Ebola transmission. We find that the lengths of roads and rivers in close proximity to spillover sites at or near the time of spillover events are significantly correlated with the number of EVD cases, particularly in the first 100 days of each outbreak. Early management and containment efforts along transportation networks may help mitigate transmission and spatial spread in the early days of Ebola outbreaks.
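The road- and river-length proxy described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' pipeline: the coordinates, function names, and the simplification that only segments with both endpoints inside the buffer are counted are all assumptions for demonstration.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def network_length_near_site(polylines, site, radius_km):
    """Total length (km) of road/river segments whose endpoints both lie
    within radius_km of the spillover site. Simplification: segments that
    straddle the buffer boundary are dropped rather than clipped."""
    total = 0.0
    for line in polylines:
        for a, b in zip(line, line[1:]):
            if (haversine_km(*a, *site) <= radius_km
                    and haversine_km(*b, *site) <= radius_km):
                total += haversine_km(*a, *b)
    return total

# Toy example: a digitized road running due north from a hypothetical site.
road = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]  # ~11.1 km per segment
```

With a 50 km buffer both segments count (roughly 22.2 km of road); with a 12 km buffer only the first segment does.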
Acknowledgments
We thank Kelsee Baranowski, Christina Faust, and Ephraim Hanks for their valuable feedback and insight in the development of this manuscript.
Funding
This study was supported by the joint National Institutes of Health (NIH) - National Science Foundation (NSF) - National Institute of Food and Agriculture (NIFA) Ecology and Evolution of Infectious Disease (award R01TW012434 to NB), NSF RAPID (award 2202872 to NB), and the Intramural Research Program of the National Institute of Allergy and Infectious Diseases (NIAID), NIH (1ZIAAI001179-01 to VM). SNS was partly supported by funding to Verena ( viralemergence.org ) from the NSF (award BII 2021909 and BII 2213854). Funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Data availability.
Hard copies of all Michelin maps used in this study are available at the Donald W. Hamer Center for Maps and Geospatial Information in the Penn State University Pattee Library. Digital copies and code used for analyses are available at https://github.com/bhartilab/EbolaMaps .
License: CC BY. Citation: medRxiv. 2023 Dec 19;:2023.12.18.23300175.
PMC10775321 (PMID 38196589)

Methods
The Cancer Genome Atlas (TCGA) molecularly characterizes over 20,000 primary cancer and matched normal samples spanning 33 different cancer types. The National Human Genome Research Institute and the National Cancer Institute began this collaboration in 2006, and TCGA has since produced over 2.5 petabytes of genomic, epigenomic, transcriptomic, and proteomic data ( 10 ). In the current study we used the TCGA colon and rectal data set (COAD) and the GDC TCGA Lower Grade Glioma (LGG) data set.
To access TCGA data we used the Xena platform ( 11 ) and cBioportal ( 12 ). Statistical analysis was done with SPSS v26. | Results
Table 1 contains demographics and clinical characteristics of lower grade glioma and colorectal cancers studied.
Figure 1 shows age at diagnosis of lower grade glioma and colorectal cancers studied.
Figure 2 shows overall survival and disease-free survival of lower grade glioma and colorectal cancers studied.
Figure 3 shows genetic separation of lower grade gliomas into two disease groups. Each row contains data from a single sample. Row order is determined by sorting the rows by their column values. Each gray or white band in column A indicates 10 samples. Loss of chromosome arms 1p and 19q is indicated by blue blocks, columns B and C, 166 of 506 patients. TP53 and ATRX mutations are indicated in columns D and E. Anaplastic oligodendrogliomas and mixed gliomas (column F) with 1p 19q co-deletions and few or no TP53 or ATRX mutations fall into disease group 1. Anaplastic astrocytomas (orange bands, column F) with no 1p 19q co-deletions and many TP53 and ATRX mutations fall into disease group 2. Although TCGA itself has played a pivotal role in developing the 2021 WHO classification (WHO CNS5), its proprietary databases still retain outdated diagnoses that frequently appear incorrect and misleading by WHO CNS5 standards ( 13 ). It is better to classify the tumors based on the 2021 WHO classification and avoid terms such as anaplastic or mixed glioma found in TCGA.
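The grouping logic that Figure 3 describes can be made concrete with a toy table. This is a hypothetical sketch, not TCGA data: the sample IDs, field names, and boolean calls below are invented for illustration; real arm-level and mutation calls would come from the TCGA LGG copy-number and mutation tables.

```python
# Hypothetical arm-level and mutation calls (invented sample IDs and values).
samples = [
    {"id": "s1", "loss_1p": True,  "loss_19q": True,  "TP53": False, "ATRX": False},
    {"id": "s2", "loss_1p": True,  "loss_19q": True,  "TP53": False, "ATRX": False},
    {"id": "s3", "loss_1p": False, "loss_19q": False, "TP53": True,  "ATRX": True},
    {"id": "s4", "loss_1p": False, "loss_19q": False, "TP53": True,  "ATRX": False},
    {"id": "s5", "loss_1p": True,  "loss_19q": True,  "TP53": False, "ATRX": False},
]

def disease_group(s):
    """Group 1: 1p/19q co-deleted (oligodendroglial profile).
    Group 2: 1p/19q intact, typically TP53/ATRX mutant (astrocytic profile)."""
    return 1 if s["loss_1p"] and s["loss_19q"] else 2

for s in samples:
    s["group"] = disease_group(s)

# Row ordering as in the figure: sort rows by their column values.
ordered = sorted(
    samples,
    key=lambda s: (s["loss_1p"], s["loss_19q"], s["TP53"], s["ATRX"]),
    reverse=True,
)
```

The co-deleted rows cluster at the top of the sorted table, reproducing the visual separation into the two disease groups.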
Figure 4 shows genetic analysis of chromosomes 1 and 19 in TCGA colorectal data, 616 patients. Note the loss of chromosome arm 1p (column B, lower blue block, left). The 1p loss is associated with histologic code 8140/3, adenocarcinoma not otherwise specified (column F), 150 of 616 patients. No loss of chromosome 19q, like that in lower grade gliomas, has occurred (column C). Unlike in glioma, TP53 mutations are associated with the 1p loss (column D), but ATRX mutations are not (column E).
Figure 5 shows survival of 616 patients with colorectal cancer. The 1p deletion had no effect on survival (p = 0.6, log-rank test).
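The survival comparison above was made with a log-rank test. As an illustration of what that test computes, here is a minimal standard-library implementation of the two-group log-rank statistic with the usual hypergeometric variance; this is a didactic sketch, not the authors' statistical procedure (their analyses used SPSS).

```python
import math

def logrank_test(times1, events1, times2, events2):
    """Two-group log-rank test. times: follow-up times; events: 1 if the
    event (death) was observed, 0 if censored. Returns (chi2, p) with 1 df."""
    pooled = [(t, e, 1) for t, e in zip(times1, events1)] + \
             [(t, e, 2) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in pooled if e == 1})
    obs1 = exp1 = var = 0.0
    for t in event_times:
        n = sum(1 for tt, _, _ in pooled if tt >= t)               # at risk overall
        n1 = sum(1 for tt, _, g in pooled if tt >= t and g == 1)   # at risk, group 1
        d = sum(1 for tt, e, _ in pooled if tt == t and e == 1)    # deaths at t
        d1 = sum(1 for tt, e, g in pooled if tt == t and e == 1 and g == 1)
        obs1 += d1
        exp1 += d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = (obs1 - exp1) ** 2 / var if var > 0 else 0.0
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival function, 1 df
    return chi2, p
```

Two identical groups give a statistic of zero (p = 1), while clearly separated survival times give a large statistic and small p.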
Enteric neurons and enteric glial cells are a part of the enteric nervous system, which is sometimes referred to as the “second brain” of the body. This complex network of neurons controls various functions of the gastrointestinal tract, including motility, secretion, and blood flow. Research has shown that there is a connection between enteric neurons and the development of colorectal cancer, although the exact mechanisms are still being studied ( 5 ).
Some potential links between enteric neurons, enteric glia and colorectal cancer include:
Neurotransmitter Signaling: Enteric neurons release neurotransmitters that can influence the behavior of nearby cells. Altered neurotransmitter signaling in the gut might affect the proliferation and survival of colorectal cancer cells ( 14 ).
Inflammation: Inflammation in the gastrointestinal tract is known to be a risk factor for colorectal cancer. Enteric neurons can influence immune responses and inflammation in the gut, potentially contributing to cancer development ( 15 ).
Neural Control of Motility: Abnormal motility in the colon may affect the exposure of colonic cells to carcinogens or influence the ability of the immune system to surveil and eliminate cancerous cells ( 16 ).
Colorectal cancer cells adhere to and migrate along the neurons of the enteric nervous system ( 17 ). Therefore, cancer cells might be expected to pick up mutations from neurons and enteric glial cells during recombination events. We hypothesize that the chromosome 1p deletion in colorectal cancer above is not a chance event and instead was acquired from adjacent enteric glial cells.
A persistent question is why the incidence of glioma of the enteric nervous system is so low. One possible explanation is that enteric glia might have the chromosome 1p deletion and lack the chromosome 19q deletion of CNS gliomas.
The chromosome 1p 19q co-deletion is a favorable prognostic factor in patients with low grade glioma. Chromosome 1p co-deletion may confer better survival in patients with lower grade glioma in part because of loss of the MycBP oncogene, which is important in glioma development ( 18 ).
Evidence exists for a tumor suppressor gene on chromosome 19q associated with astrocytomas, oligodendrogliomas, and mixed gliomas ( 19 , 20 ). Lower grade gliomas ( figure 3 ) had the 1p 19q codeletion or no deletions. None had only the 1p deletion, suggesting that if the 1p deletion occurs alone it may render glial cells more resistant to malignant transformation.
Our study has weaknesses:
First, chromosome 1p carries a large number of genes. A deletion of an entire chromosomal arm, combined with the very different embryology and epidemiology of colorectal cancer versus lower grade glioma, creates a significant chance that this nonspecific mutation is a false discovery.
Second, the 1p and 19q co-deletion in brain tumors is mediated by an unbalanced t(1;19)(q10;p10) chromosomal translocation ( 21 , 22 ). It is a centric fusion between chromosomes 1 and 19 with subsequent loss of 1p/19q, whereas the 1q/19p chromosome is retained. The 1p deletion in colon cancer is different: it is of various sizes, with a minimum common deleted region of 1p36 ( 23 – 26 ). One group found that the deleted region lies between markers D1S199 and D1S234 ( 26 ), whereas another group placed it between markers D1S2647 and D1S2644 ( 27 ). Thus, the two genetic events, 1p and 19q co-deletion in brain tumors and 1p deletion in colon cancer, may not be the same and may not be related.
Enteric glial cells were shown to stimulate expansion of colon cancer stem cells and ability to give rise to tumors via paracrine signaling ( 5 , 28 ).
In conclusion, we hypothesize that the chromosome 1p deletion in colorectal cancer is not a chance event and instead is acquired from adjacent enteric glial cells. Moreover, enteric glia might have the chromosome 1p deletion but lack the chromosome 19q deletion of CNS gliomas, making them much less vulnerable to malignant transformation than CNS gliomas.
Dr. Lehrer and Dr. Rheinstein contributed equally to the conception, writing, and data analysis of this study.
Background:
Enteric neurons and enteric glial cells are a part of the enteric nervous system, which is sometimes referred to as the “second brain” of the body. This complex network of neurons controls various functions of the gastrointestinal tract, including motility, secretion, and blood flow. Research has shown that there is a connection between enteric neurons and the development of colorectal cancer, although the exact mechanisms are still being studied.
Methods:
Because of the potential influence of chromosome mutations that may be common to both gliomas and colorectal cancer, we used the Cancer Genome Atlas (TCGA) to examine these mutations.
Results:
166 of 506 lower grade gliomas had the 1p 19q co-deletion. 150 of 616 colorectal cancers had a 1p deletion but no 19q deletion.
Conclusion:
Colorectal cancer cells adhere to and migrate along the neurons of the enteric nervous system. Therefore, cancer cells might be expected to pick up mutations from neurons and enteric glial cells during recombination events. We hypothesize that the chromosome 1p deletion in colorectal cancer is not a chance event and instead was acquired from adjacent enteric glial cells. Chromosome 1p co-deletion may confer better survival in patients with lower grade glioma in part because of loss of the MycBP oncogene, which is important in glioma development. Enteric glia might have the chromosome 1p deletion but lack the chromosome 19q deletion of CNS gliomas, making them much less vulnerable to malignant transformation than CNS gliomas. Indeed, evidence exists for a tumor suppressor gene on chromosome 19q associated with human astrocytomas, oligodendrogliomas, and mixed gliomas.
Introduction
Eighty percent of malignant primary brain tumors are gliomas. They arise from mutations affecting neural stem cells or glial cells. One well documented pair of mutations is the chromosome 1p 19q codeletion in lower grade gliomas, a favorable prognostic marker ( 1 ).
Glial cells are present in the brain, central nervous system, and enteric nervous system, a complex network of neurons and accompanying glial cells (enteric glial cells, EGCs) which controls the major functions of the gastrointestinal (GI) tract. These cells play a crucial role in regulating intestinal motility, mucosal barrier function, and immune responses in the gut. Local glial cells may be major contributors to inflammatory pain ( 2 ) and multiple subtypes have been identified ( 3 , 4 ). EGCs resemble brain glia in many ways and can function as intestinal stem cells. They are found within the walls of the entire GI tract ( 5 – 7 ).
Colorectal cancer is a type of cancer that originates in the colon or rectum. It is one of the most common forms of cancer and can be influenced by various genetic and environmental factors. Chromosome mutations can be a contributing factor to the development of colorectal cancer. Mutations in specific genes, such as APC (adenomatous polyposis coli), KRAS, TP53, and others, are commonly associated with colorectal cancer. In one study, deletion of chromosome 1p was detected in 22 of 82 colorectal cancers and conferred a worse prognosis ( 8 ).
Some studies have analyzed the relationship between enteric glial cells and their promotion of the development of colorectal cancer ( 9 ). Because of the potential influence of chromosome mutations that may be common to both gliomas and colorectal cancer, we used the Cancer Genome Atlas (TCGA) to examine these mutations.
Acknowledgments
This work was supported in part through the computational and data resources and staff expertise provided by Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai and supported by the Clinical and Translational Science Awards (CTSA) grant UL1TR004419 from the National Center for Advancing Translational Sciences.
License: CC BY. Citation: medRxiv. 2023 Dec 22;:2023.11.07.23298214.
PMC10775324 (PMID 38196612)

Introduction
Non-diabetic chronic kidney disease (CKD) is associated with metabolic dysregulation, including disrupted insulin and glucose homeostasis 1 – 3 . Factors contributing to CKD-associated glucometabolic complications include increased inflammation 4 and hyperglucagonemia 5 . Prior studies in CKD using hyperinsulinemic-euglycemic clamp and oral glucose tolerance testing have demonstrated lower insulin clearance and insulin sensitivity that is not compensated for by enhanced insulin secretion, leading to a high prevalence of glucose intolerance 3 . An impaired response of the incretin hormones, key regulators of insulin secretion and glucose homeostasis, could be an important mechanism contributing to inadequate insulin secretion in CKD. However, understanding of how CKD impacts postprandial incretin secretion is limited. This knowledge is key to understanding any potential heterogeneity in response to incretin analogues in the CKD population.
Incretin hormones are secreted by the gut in response to nutrient intake and promote glucose-stimulated insulin secretion 6 . The two main incretin hormones are glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), secreted by the enteroendocrine L and K cells, respectively 7 , 8 . Together, GLP-1 and GIP account for up to 70% of postprandial insulin secretion (the incretin effect) in healthy individuals 9 . While patients with type 2 diabetes are thought to have an impaired incretin effect 10 , little is known about the independent effect of CKD on the response of the incretin peptides to nutrient ingestion and the islet endocrine cells’ response to them. Both incretins similarly mediate the gastrointestinal glucose-dependent stimulation of insulin secretion. However, the incretins have opposing effects on glucagon secretion, with GLP-1 suppressing 11 and GIP stimulating glucagon secretion 12 . Whether and how GLP-1 and GIP in combination impact postprandial glucagon suppression in CKD remains unknown. Additionally, little is known about how CKD affects dipeptidyl peptidase-4 (DPP-4), a ubiquitous enzyme that inactivates incretin hormones and thereby influences glucagon and insulin release and thus glucose homeostasis 13 .
The current study investigates postprandial incretin hormone levels and their determinants using a standardized oral glucose tolerance test (OGTT) comparing non-diabetic patients with CKD and controls. We first describe the association of the presence and severity of kidney disease with circulating concentrations of incretin hormones in both fasted and postprandial states. We separately investigate the association of postprandial circulating incretin hormones with insulin, c-peptide, and glucagon levels during an OGTT by CKD status. We hypothesized that non-diabetic CKD is associated with reduced incretin hormone release and impaired glucagon suppression that contribute to glucometabolic complications underlying heightened cardiometabolic risk in CKD. | Methods
Study population and study design:
The Study of Glucose and Insulin in Renal Disease (SUGAR) was a cross-sectional study of moderate-severe non-diabetic CKD. A total of 98 participants were recruited for this study, of whom 59 had CKD (eGFR < 60 ml/min per 1.73 m 2 ) and 39 were controls (eGFR > 60 ml/min per 1.73 m 2 ), frequency matched on age, sex, and race. Exclusion criteria for both groups included age <18 years, a clinical diagnosis of diabetes, maintenance dialysis or fistula in place, history of kidney transplantation, use of medications known to reduce insulin sensitivity (including corticosteroids and immunosuppressants), fasting serum glucose ≥126 mg/dl, and hemoglobin <10 g/dl. All enrolled participants attended a screening visit, at which eligibility was assessed and written informed consent was obtained. Serum biomarkers of kidney function were measured in fasting blood. A more detailed description of the study design, recruitment, and enrollment has been published previously 3 , 14 .
CKD classification:
Serum creatinine and cystatin C (Gentian) were measured in fasting serum collected immediately prior to the hyperinsulinemic-euglycemic clamp using a Beckman DxC automated chemistry analyzer. Primary analyses used GFR estimated using the CKD-EPI Creatinine-Cystatin C Equation (2012) 15 to follow precedent of the original eligibility criteria, categorizations, and analyses. Sensitivity analyses were performed using the more recent race-neutral CKD-EPI Creatinine-Cystatin C Equation (2021) 16 .
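The 2021 race-neutral CKD-EPI creatinine-cystatin C equation used in the sensitivity analyses can be written as a small function. This is a sketch: the coefficients below are transcribed from Inker et al. (2021) and should be verified against the original publication before any real use; the study classification threshold (eGFR < 60 ml/min per 1.73 m 2 ) follows the text above.

```python
def egfr_ckdepi_2021_cr_cys(scr_mg_dl, scys_mg_l, age_years, female):
    """Race-neutral CKD-EPI creatinine-cystatin C equation (2021 refit).
    Coefficients transcribed from the literature; verify before real use."""
    kappa = 0.7 if female else 0.9      # creatinine knot
    alpha = -0.219 if female else -0.144
    egfr = (135.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -0.544
            * min(scys_mg_l / 0.8, 1.0) ** -0.323
            * max(scys_mg_l / 0.8, 1.0) ** -0.778
            * 0.9961 ** age_years)
    return egfr * 0.963 if female else egfr

def is_ckd(egfr):
    """Study-style classification: CKD if eGFR < 60 ml/min per 1.73 m^2."""
    return egfr < 60.0
```

For a 50-year-old man with normal creatinine (0.9 mg/dl) and cystatin C (0.8 mg/l) this yields an eGFR of roughly 111; higher creatinine and cystatin C values push the estimate below the study's CKD threshold.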
Oral glucose tolerance test and hyperinsulinemic-euglycemic insulin clamp:
A standard 75g OGTT was performed approximately one week after the hyperinsulinemic-euglycemic insulin clamp. Plasma glucose, insulin, total GLP-1, and total GIP concentrations were measured at −10, −5, 0, 30, 60, 90, and 120 minutes. We averaged the −10 to 0 minute time points to generate baseline fasting values. Plasma glucagon levels were measured at 0, 30, and 120 minutes. The postprandial incretin hormone responses were calculated as areas under the curve (AUC) using the trapezoid rule and evaluated both as total AUC (tAUC) and incremental AUC (iAUC). Glucose iAUC and 2-hour plasma glucose were calculated as measures of glucose tolerance. The insulinogenic index was calculated as the change in plasma insulin divided by the change in plasma glucose from baseline to 30 minutes of the OGTT. Clamp insulin sensitivity and the Matsuda index were the primary and secondary measures of insulin sensitivity. Details of the clamp and OGTT procedures have been published previously 17 , 18 .
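The tAUC, iAUC, and insulinogenic index calculations can be made concrete. A minimal sketch, assuming sampling times in minutes and the trapezoid rule as described; the baseline here is taken as the mean of all samples at t ≤ 0, matching the averaging of the −10 to 0 minute draws.

```python
def total_and_incremental_auc(times_min, conc):
    """tAUC by the trapezoid rule over the sampling window; iAUC subtracts
    the fasting baseline (mean of samples at t <= 0) held constant."""
    baseline_vals = [c for t, c in zip(times_min, conc) if t <= 0]
    baseline = sum(baseline_vals) / len(baseline_vals)
    tauc = 0.0
    for i in range(1, len(times_min)):
        tauc += (conc[i] + conc[i - 1]) / 2.0 * (times_min[i] - times_min[i - 1])
    iauc = tauc - baseline * (times_min[-1] - times_min[0])
    return tauc, iauc

def insulinogenic_index(ins0, ins30, glu0, glu30):
    """Change in plasma insulin divided by change in plasma glucose,
    baseline to 30 minutes of the OGTT."""
    return (ins30 - ins0) / (glu30 - glu0)
```

For example, a flat concentration profile gives a positive tAUC but an iAUC of zero, which is why iAUC isolates the response above fasting levels.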
Measurement of GLP-1, GIP, glucagon, insulin, glucose, C-peptide, DPP-4, and inflammatory biomarkers:
Plasma samples were assayed for total GLP-1 and total GIP using multiplex electrochemiluminescence (Meso Scale Discovery, Rockville, MD, USA). Average intra-run concentration coefficients of variation for GIP and GLP-1 were 8.3% and 2.7% (high control), 4% and 2.5% (medium control), and 11% and 3.6% (low control) respectively. Plasma glucagon was measured by ELISA (Mercodia). DPP-4 antigen concentration was determined by ELISA (eBioscience). Average intra-run concentration coefficients of variation for glucagon were 2.1% (high control), 14% (medium control), and 6.3% (low control). Blood glucose concentrations were measured using the glucose hexokinase method (Roche Module P Chemistry autoanalyzer; Roche, Basel, Switzerland) and blood insulin concentrations were measured using 2-site immune-enzymometric assay (Tosoh 2000 Autoanalyzer). C-peptide concentrations were determined using a standard double-antibody radioimmunoassay (Diagnostic Products Corporation, Los Angeles, CA, USA). DPP-4 activity was assayed by incubating plasma with a colorimetric substrate, l-glycyl‐l-prolyl p‐nitroanilide, hydrochloride (Sigma), at 37°C. Serum inflammation biomarkers were measured in the fasting blood. CRP was measured with a Beckman Coulter (USA) DxC chemistry analyzer. Serum TNF-α, IL-6, IFN-γ, and IL-1β were performed using commercial multiplex electroluminescence assays (Meso Scale Discovery, Rockville, MD, USA). All assays were performed in duplicate.
Covariates:
Demographic and medical history of participants were self-reported. Cardiovascular disease (CVD) was defined as a physician diagnosis of myocardial infarction, stroke, resuscitated cardiac arrest, or heart failure or a history of coronary or cerebral revascularization. The Human Activity Profile (HAP) maximum activity score was used to quantify physical activity. Food intake was recorded using three days of prospective food diaries analyzed with Nutrition Data System for Research software. Body composition was measured by DXA (GE Lunar or Prodigy and iDXA; EnCore Software versions 12.3 and 14.1; GE Healthcare, Waukesha, WI).
Statistical analysis
To compare plasma incretin levels by CKD status during the OGTT, we used linear regression adjusted for potential confounders including age, sex, smoking status, fat-free mass, fat mass, calorie intake, physical activity, and CVD. All clinical data were checked for normality. The Spearman correlation coefficient was used to evaluate the univariable relationship between kidney function and incretin levels during the OGTT. Total and incremental AUCs were used to evaluate total incretin hormone levels and incretin hormone responses during the OGTT, respectively. The rate of acute incretin peripheral response was calculated as the difference between plasma incretin levels at baseline and at 30 minutes post OGTT, divided by the elapsed time. Linear regression adjusted for confounders was used to investigate the association of CKD status with incretin levels and of incretins with measures of insulin resistance, plasma insulin concentrations, and plasma inflammatory biomarkers. Analyses were conducted using R version 4.2.2 19 . Boxplots and scatterplots were made using GraphPad Prism version 10.0.0 (GraphPad Software, Inc., San Diego, California).
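The covariate-adjusted comparisons described above were done in R; the same estimation can be sketched with ordinary least squares on a design matrix. The data below are synthetic (a known CKD effect of +1100 units on a tAUC-like outcome is injected, with age as the sole confounder) purely to show that the adjusted coefficient recovers the group difference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic illustration only: outcome = intercept + CKD effect + age effect + noise.
age = rng.normal(60.0, 10.0, n)
ckd = rng.integers(0, 2, n).astype(float)          # 0 = control, 1 = CKD
outcome = 3000.0 + 1100.0 * ckd + 15.0 * (age - 60.0) + rng.normal(0.0, 200.0, n)

# Design matrix: intercept, CKD indicator, adjusted covariate (age).
X = np.column_stack([np.ones(n), ckd, age])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
ckd_effect = beta[1]  # adjusted mean difference associated with CKD
```

In the real analysis the covariate columns would also include sex, smoking status, body composition, calorie intake, physical activity, and CVD, but the estimation mechanics are the same.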
Study approval
The procedures in the study and informed consent forms were reviewed and approved by the University of Washington Human Subjects Division (HSD). All participants provided written informed consent. | Results
Characteristics of the study participants.
The study included a total of 98 participants, of whom 59 had CKD (eGFR <60 ml/min per 1.73 m 2 ) and 39 were healthy controls (eGFR ≥60 ml/min per 1.73 m 2 ). The mean (± SD) age among CKD participants was 63.6 ± 13.9 years, 51% were female, and 22% self-identified as black. Mean (range) eGFR was 37.6 (9.5 to 59.5 ml/min per 1.73 m 2 ) compared to 88.8 (61 to 117 ml/min per 1.73 m 2 ) among controls ( Table 1 ). Compared with controls, participants with CKD were more likely to have cardiovascular disease, to be smokers, to be less physically active, to have higher body weight, fat mass, and plasma inflammatory markers, and to have lower daily calorie intake ( Table 1 ).
Cross-sectional associations with total incretin levels (tAUC) and incretin response (iAUC) during OGTT in the overall cohort.
The mean ± SD incremental and total GLP-1 area under the curve (GLP-1 iAUC and tAUC) during the OGTT were 1464 ± 1460 and 3043 ± 1899 pM × min respectively. The mean incremental and total post-prandial GIP area under the curve (GIP iAUC and tAUC) were 54327 ± 31785 and 68653 ± 36880 pg × min/ml respectively in the overall cohort ( Table 2 ). Both GLP-1 and GIP response (iAUC) were negatively correlated with caloric intake (r=−0.26 and −0.30; P <0.05), lean mass (r=−0.37 and −0.24, P <0.05) and physical activity (r=−0.17, P =0.08 and r=−0.28, P <0.05) in the overall cohort. In comparison we found no association of body composition or age with either total GLP-1 or GIP. In the overall cohort, eGFR was inversely correlated with only total GLP-1 levels (tAUC), but not GLP-1 response (iAUC) ( Figure 1A and 1C ). In the CKD subgroup, eGFR was inversely correlated with both total GLP-1 levels and GLP-1 response (r=−0.37 and r=−0.26 P <0.05). In comparison, eGFR was inversely correlated with both total GIP and GIP response in the overall cohort ( Figure 1B and 1D ). There was no significant or meaningful correlation of eGFR with total GIP (r=0.17, P =0.21) or iAUC (r=0.17, P =0.21) in the CKD subgroup.
CKD was associated with greater fasting plasma incretin levels and varied incretin response during an OGTT.
CKD was associated with higher fasting GLP-1 levels, with a mean of 16.2 ± 11.6 compared to 8.5 ± 3.3 pM among controls ( P <0.01) ( Table 2 , Supplemental Table 1 ). GLP-1 tAUC measured during the OGTT was higher in participants with CKD versus controls ( Table 2 , Figure 2A ). After adjusting for age, sex and race, CKD was associated with a 1192 pM × min higher GLP-1 tAUC (95% CI of 406 to 1978; P <0.01) ( Table 3 ). Adjusting for other clinically relevant covariates only modestly attenuated the magnitude of the association ( Table 3 ). In the final multivariable adjusted model, CKD was associated with a 1100 pM × min higher GLP-1 tAUC (95% CI of 119 to 2080; P =0.03) ( Table 3 ). Despite CKD patients having higher total GLP-1 levels at fasting and during the OGTT, there was no significant difference in GLP-1 response (GLP-1 iAUC) compared to controls ( Table 2 and Table 3 ).
Mean fasting GIP level was higher among the CKD group with a mean of 134.5 ± 104.1 versus 97 ± 112.6 pg/ml in controls ( P <0.01) ( Table 2 , Supplemental Table 1 ), but the estimated mean difference was not significant after adjusting for potential confounders ( Supplemental Table 1 ). In contrast, both total postprandial GIP level and GIP response were elevated in CKD compared to controls ( Table 2 and Figure 2B ). Adjusting for potential confounders attenuated the estimated association by 24% to an estimated mean difference of 15271 pg × min/ml higher GIP iAUC (95% CI of 387 to 30154; P =0.04) in CKD compared to controls ( Table 3 ).
The rate of acute GIP increase in the first 30 minutes of OGTT was greater in CKD compared to controls. The mean rate of increase in GIP within the first 30 minutes of the OGTT was 249 ± 111 vs 177 ± 101 pg/ml/min in CKD and controls, respectively. CKD patients had an estimated mean 167pg/ml/min greater rate of increase in GIP (95% CI of 50 to 284; P<0.01) compared to controls after adjustment for potential confounders ( Supplemental Table 2 ). Further adjustment for fasting plasma GIP levels did not meaningfully impact estimates of association. In contrast, the CKD patients did not differ meaningfully or significantly in their mean rate of increase in GLP-1 during the first 30 minutes of the OGTT ( Supplemental Table 2 ).
GIP response, but not GLP-1 response was associated with insulinotropic effects during OGTT.
Total postprandial insulin levels during the OGTT did not significantly differ between CKD and controls, whereas C-peptide levels were more consistently greater at each time point in CKD during the OGTT ( Figure 2C and 2D ). No significant differences were observed in insulin response measured by insulin iAUC and insulinogenic index between CKD and controls ( Table 2 ). Similarly, we found no meaningful or significant difference by CKD status in glucose tolerance measured by glucose iAUC ( Table 2 , Figure 2E ). GLP-1 response (GLP-1 iAUC) was not meaningfully or significantly associated with insulin, C-peptide, or glucose iAUCs in the overall cohort ( Supplemental Figure 1A , 1C , and 1E ). In the overall cohort, GIP response (GIP iAUC) was significantly correlated with insulin (r=0.25, P =0.01) and C-peptide response (r=0.29, P <0.01) but not glucose iAUC (r=−0.03, P =0.78). These correlations were generally weaker in patients with CKD (r=0.21, P =0.12; r=0.24, P =0.07; r=0.03, P =0.92, respectively) compared with controls (r=0.33, P =0.04; r=0.47, P <0.01; r=−0.17, P =0.29, respectively) ( Supplemental Figure 1B , 1D and 1F ).
Plasma glucagon levels were elevated in CKD compared to controls in response to OGTT.
Fasting plasma glucagon levels were not significantly different between CKD and controls ( Table 2 , Supplemental Table 1 , Figure 2F ). During the OGTT plasma glucagon levels were higher at 30 minutes and 120 minutes in CKD compared to controls ( Table 2 , Figure 2F ). After adjusting for baseline glucagon levels, CKD was associated with 0.9 mg/dl higher levels at 30 minutes (95% CI of 0.15, 1.7; P =0.02) and 0.5 mg/dl higher at 120 minutes (95% CI of 0.1 to 0.9; P =0.02) post OGTT. The percent change in glucagon levels from baseline to 30 minutes post OGTT was attenuated in CKD with a median [IQR] of −27% [−11 to −46] versus −38% [−19 to −57] among controls. The percent change from baseline was also modestly attenuated at 2 hours post OGTT among CKD with median [IQR] of −70% [−57 to −80] compared to −78% [−60 to −88] in controls.
The fasting plasma dipeptidyl peptidase-4 (DPP-4) activity and antigen levels were similar between CKD and controls.
The mean fasting plasma DPP-4 antigen levels were similar among CKD and controls (mean ± SD= 56.2 ± 15.3 versus 55.7 ± 15.5 ng/ml; P =0.88) ( Figure 3A ). The mean fasting plasma DPP-4 activity levels were also similar among the two groups (mean ± SD= 28.7 ± 7.3 versus 28.8 ± 6.3 μM/min; P =0.95) ( Figure 3B ).
Greater inflammation was associated with greater incretin levels and incretin response in CKD.
In the overall cohort, plasma TNF-α levels were significantly associated with GIP response, and CRP levels were significantly associated with GLP-1 response ( Supplemental Table 3 ). In the CKD subgroup, greater CRP was also associated with greater GLP-1 response ( Supplemental Table 3 ). Among patients with CKD each 1 mg/dL greater plasma CRP was associated with 0.58 greater pM GLP-1 response (95% CI of 0.37 to 0.8; P <0.01) in CKD ( Supplemental Table 3 ).
Sensitivity analyses using the CKD-EPI creatinine-cystatin C 2021 equation yielded similar outcomes.
Using the race-neutral equation, three CKD participants were reclassified to controls resulting in 42 controls and 56 CKD. The eGFR was similar among CKD and controls compared to the 2012 equation (Table 1) . Despite the modest shift in group assignments, all the above analyses were replicated with the 2021 formula and showed similar outcomes ( Supplemental Table 4 and Supplemental Figure 2 ). | Discussion
Our findings demonstrate that the presence and severity of non-diabetic moderate-severe CKD is associated with greater plasma levels of incretins during fasting and in response to an OGTT. The elevated circulating GLP-1 and GIP levels in the fasting state and postprandial conditions were observed in the absence of any significant difference in fasting glucagon levels, DPP-4 antigen, or activity levels. Acute GIP release and GIP response (iAUC) during the OGTT were significantly higher in CKD compared to controls. The correlation of incretin levels with OGTT stimulated insulin or c-peptide was attenuated in those with CKD compared with controls. Concomitantly, CKD was associated with elevated postprandial plasma glucagon levels and impaired glucagon suppression post OGTT. In CKD, the inflammatory biomarker CRP was associated with elevated incretin response. Overall, our findings show that non-diabetic moderate-severe CKD is associated with greater postprandial incretin levels and an augmented GIP response during OGTT that do not translate into meaningful improvements in insulin, glucose, or glucagon homeostasis.
We found that elevated fasting and post-prandial plasma incretin levels in CKD were independent of differences in circulating fasting DPP-4 levels and activity, suggesting that these differences are unlikely due to reduced incretin degradation. DPP-4 is considered the predominant enzyme responsible for incretin degradation; however, it remains unknown if DPP-4 activity is altered during oral glucose tolerance testing in CKD. It is notable that our findings are consistent with other studies in patients with non-diabetic end-stage renal disease (ESRD). One prior study showed greater GLP-1 levels in response to a high-calorie mixed meal in non-diabetic ESRD subjects compared to healthy controls 20 , while another small study of nine non-diabetic hemodialysis patients and 10 healthy controls found elevated fasting and postprandial total GIP response during a standardized meal 21 . Like these prior studies, we measured only total GLP-1 and GIP and are unable to distinguish the proportion of active from inactive incretin fragments in our CKD patients. Future studies are needed to confirm if the augmented incretin levels reflect parallel increases in active GLP-1 and GIP secretion in CKD and the possible influence of the uremic milieu on potential alternative incretin degradation pathways.
In our study, CKD was associated with a greater rate of GIP increase, but not GLP-1 increase, in the first 30 minutes of OGTT compared to controls ( Supplemental Table 2 ). This difference in the rate of GIP increase between CKD and controls was independent of differences in fasting levels of GIP, implying that these differences may be independent of reduced clearance of GIP. Controversy exists regarding the role of renal clearance on incretin response. A prior small case-control study in a select group of patients with more modest kidney disease (mean creatinine clearance 46 ml/min) suggested similar metabolic clearance rates and plasma half-lives of intact GLP-1 and intact GIP but prolonged metabolite half-lives with intravenous GLP-1 and GIP infusion in CKD compared to controls 22 . This study was limited by both the lack of any urinary measurements necessary to accurately assess renal clearance and the lack of assessment of lean mass, which is known to be reduced in patients with CKD and influences the volume of distribution, confounding estimates of drug clearance. Another study in patients with ESRD treated with dialysis showed no difference in incretin response compared with controls, casting doubt on the impact of renal clearance on incretin response and suggesting a preserved ability to degrade and eliminate active GLP-1 and GIP and their metabolites in ESRD 23 . More detailed studies are needed to directly assess secretion, elimination, and breakdown of intact incretin hormones and their metabolites across the spectrum of CKD.
Disruption of postprandial incretin hormone response (iAUC) in CKD appeared to influence downstream insulin, C-peptide, and glucagon homeostasis during the OGTT. In healthy adults, GIP is considered more strongly insulinotropic than GLP-1 24 . Consistent with these findings, we found a stronger positive correlation between GIP response and insulin/C-peptide compared to GLP-1. Furthermore, we noted that this correlation between GIP response and insulin/C-peptide was noticeably weaker in patients with CKD compared to controls. In comparison, we found no meaningful correlation of GLP-1 with insulinotropic response. Our findings expand on those of prior studies suggesting that non-diabetic patients with CKD demonstrate a blunted insulinotropic effect of incretins akin to patients with type 2 diabetes and normal kidney function 25 , 26 . However, CKD patients appeared to have a numerically greater baseline-corrected insulin response (insulin iAUC), reflecting reduced insulin clearance 3 , and a similar acute insulin response estimated by the insulinogenic index compared to controls ( Table 2 ). This may suggest that altered glucose homeostasis in CKD patients may be attributed to inadequate augmentation of the insulin response by incretin hormones (especially GLP-1) or resistance to insulin’s actions on peripheral tissues. Our findings are consistent with results from a randomized double-blind study that also showed that non-diabetic ESRD patients exhibit reduced incretin action on insulin production for both GLP-1 and GIP despite adequate insulin response during IV glucose stimulation 27 . While mechanistic studies of CKD in 5/6 th nephrectomized mice have observed impaired β-cell insulin secretion in response to glucose 28 , none have specifically investigated β-cell resistance to GIP activity on insulin secretion.
These findings motivate mechanistic studies to investigate if disruption in the incretin response to carbohydrate consumption in non-diabetic CKD reflects resistance to incretin hormones, especially in the β-cells of the endocrine pancreas where GLP-1 and GIP receptors are abundantly expressed 29 .
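For reference, the incremental AUC (iAUC) used throughout these analyses is the trapezoidal area under the curve after subtracting the fasting (time-zero) value; the sketch below illustrates the calculation with hypothetical sampling times and hormone levels, not study data:

```python
def trapezoid_auc(times, values):
    """Total area under the curve (tAUC) via the trapezoidal rule."""
    return sum((values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def incremental_auc(times, values):
    """Baseline-corrected (incremental) AUC: subtract the fasting
    (time-zero) value from every measurement before integrating."""
    baseline = values[0]
    return trapezoid_auc(times, [v - baseline for v in values])

# Hypothetical OGTT sampling times (min) and hormone levels (arbitrary units)
times = [0, 30, 60, 120]
levels = [10, 20, 30, 20]
print(trapezoid_auc(times, levels))    # tAUC: 2700.0
print(incremental_auc(times, levels))  # iAUC: 1500.0
```

Note that the iAUC equals the tAUC minus baseline × total duration (2700 − 10 × 120 = 1500), which is why it isolates the response to the glucose load from the fasting level.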
The attenuated suppression of glucagon during the OGTT in non-diabetic moderate-severe CKD observed in our study also suggests potential disruption of alpha cell response to incretins in CKD. Despite declines in glucagon levels during the OGTT in both CKD and controls, postprandial glucagon levels remained significantly higher in the CKD group compared to controls. These findings are in line with other studies of patients with type 2 diabetes and non-diabetic patients with ESRD 5 , 23 , 30 – 32 . It suggests an altered counterregulatory balance between GIP induction and GLP-1 suppression of alpha cell glucagon production in CKD during OGTT-induced hyperglycemia. Sustained and elevated postprandial glucagon levels could have direct adverse impacts on glycemic control and amino acid catabolism contributing to muscle wasting in patients with CKD 33 – 35 . Further studies are needed to assess the factors contributing to postprandial glucagon hypersecretion and inadequate suppression and its contribution to metabolic dysregulation in CKD.
Inflammation was identified as a contributing factor for heightened incretin response to OGTT. We found that plasma C-reactive protein (CRP) (a marker of systemic inflammatory burden) was significantly associated with GLP-1 response during OGTT in CKD independent of other factors ( Table 4 ). The association of inflammatory biomarkers including CRP and IL-6 with GLP-1 levels has been reported in other observational studies 36 – 38 . Evidence from studies of patients in the intensive care unit shows a significant association between greater inflammatory biomarker levels, including IL-6 and CRP, and GLP-1 36 . This suggests a crosstalk between the inflammatory status associated with CKD and glucose metabolism regulation through the gut-driven incretin response. Interestingly, the contrary has been observed with administration of exogenous incretin mimetic therapies, which are associated with a strong anti-inflammatory response. Studies have shown that long-term incretin-based therapies significantly decrease circulating proinflammatory cytokines, including IL-6, TNF-α, IL-1 β , and MCP-1 39 – 41 . Future studies are needed to determine the biological mechanism linking elevated endogenous incretin levels and systemic inflammation in CKD and whether treatment with incretin analogues may influence inflammation and catabolism in CKD.
Our study had notable strengths and limitations. First, we recruited a relatively large group of well-characterized non-diabetic CKD participants across the spectrum of moderate-severe CKD, including measures of body composition and lifestyle factors. Second, we used an OGTT to comprehensively measure gut-derived incretin hormones, glucagon, insulin, and glucose. Third, we employed a rigorous analysis method adjusting for a wide range of potential confounders in the association of CKD with circulating incretin levels and incretin response to oral glucose. Our study was not without limitations. First, our assays measured total GLP-1 and GIP levels in the plasma, so the proportion of active to total GLP-1 and GIP and their renal clearance was not directly measured. Second, despite normal fasting glucose levels, both controls and CKD patients included individuals with impaired glucose tolerance (IGT), defined by a 2-hour glucose level of 140 mg/dL or above. However, the inclusion of individuals with IGT in our control group may suggest that the estimated differences in incretin levels and response are conservative. Third, serial blood sample collections during OGTT and clamp were acquired without the addition of a DPP-4 inhibitor, which may have impacted the levels of detected glucagon, GLP-1, and GIP. We addressed this by measuring both the plasma fasting DPP-4 antigen levels and its activity and found similar antigen and activity levels among both groups.
In conclusion, non-diabetic CKD is associated with disruption of incretin homeostasis and evidence of attenuated incretin effects on insulin, C-peptide, and glucagon secretion. These changes may contribute to the metabolic dysregulation associated with kidney disease and reveal a potential role for incretin-mimetics to address the attenuated incretin effects observed in our study. Indeed, a recent pharmacokinetic study of combination GLP-1 and GIP in the form of single-dose tirzepatide, a dual GLP-1 and GIP receptor agonist, showed similar drug clearance and tolerability in healthy controls compared to patients across all stages of CKD, including ESRD 42 . Studies are needed to investigate the differential efficacy of GLP-1 and GIP single and dual agonists on insulin, glucose, and glucagon homeostasis and links to outcomes in non-diabetic CKD. | AA and JG contributed equally to this work.
BPC and BR contributed equally to this work.
Author Contributions
The conceptualization was contributed by AA, BR, BPC, and IHDB. The methodology was contributed by AA, BR, BPC, BC, SF and IHDB. The formal analysis was conducted by AA and SF. The investigation was performed by BR, JG, BPC, and IHDB. Resources were contributed by BR, BPC, JG, and IHDB. Data curation was performed by AA, SF, MT, and LRZ. The original draft was written by AA and BR. The review was written and edited by AA, BR, BL, BJB, JG, JEN, BE, IHDB, JH, BPC, MT and LFB. Visualization was contributed by AA and SF. Supervision was carried out by BR, BRK, and BPC. Project administration was contributed by BR, JG, IHDB, and BPC. Funding acquisition was contributed by BR, JG, BPC, and IHDB.
Background:
Incretins are regulators of insulin secretion and glucose homeostasis that are metabolized by dipeptidyl peptidase-4 (DPP-4). Moderate-severe CKD may modify incretin release, metabolism, or response.
Methods:
We performed 2-hour oral glucose tolerance testing (OGTT) in 59 people with non-diabetic CKD (eGFR<60 ml/min per 1.73 m²) and 39 matched controls. We measured total (tAUC) and incremental (iAUC) area under the curve of plasma total glucagon-like peptide-1 (GLP-1) and total glucose-dependent insulinotropic polypeptide (GIP). Fasting DPP-4 levels and activity were measured. Linear regression was used to adjust for demographic, body composition, and lifestyle factors.
Results:
Mean eGFR was 38 ±13 and 89 ±17 ml/min per 1.73 m² in CKD and controls, respectively. GLP-1 iAUC and GIP iAUC were higher in CKD than controls, with a mean of 1531 ±1452 versus 1364 ±1484 pM×min, and 62370 ±33453 versus 42365 ±25061 pg×min/ml, respectively. After adjustment, CKD was associated with 15271 pM×min/ml greater GIP iAUC (95% CI 387, 30154) compared to controls. Adjustment for covariates attenuated associations of CKD with higher GLP-1 iAUC (adjusted difference, 122, 95% CI −619, 864). Plasma glucagon levels were higher at 30 minutes (mean difference, 1.6, 95% CI 0.3, 2.8 mg/dl) and 120 minutes (mean difference, 0.84, 95% CI 0.2, 1.5 mg/dl) in CKD compared to controls. There were no differences in insulin levels or plasma DPP-4 activity or levels between groups.
Conclusion:
Incretin response to oral glucose is preserved or augmented in moderate-severe CKD, without apparent differences in circulating DPP-4 concentration or activity. However, neither insulin secretion nor glucagon suppression is enhanced.
Graphical Abstract
| Supplementary Material | Acknowledgments
We thank Anthony Dematteo at Vanderbilt University who measured plasma DPP-4 antigen and activity levels. We would like to express our sincere gratitude to Steven Kahn for his valuable feedback and insightful comments on this manuscript. We thank all the participants in the SUGAR cohort for their contributions to this investigation.
Funding
Funding for this study was provided by an unrestricted grant from the Northwest Kidney Centers and R01DK087726 (IHDB), R01DK087726-S1 (IHDB), R01DK129793 (BR), R01DK087726, R01DK087726-S1, K01 DK102851 (JAA, IHDB), R01DK125794 (JLG), K24 DK096574 (TRZ), R56DK124853 (BPC), and P30 DK017047 (University of Washington Diabetes Research Center) and Dialysis Clinics Incorporated, C-4122 (BR).
Data sharing statement
Deidentified data, which have been stripped of all personal identification and information, will be made available to share upon request as part of the research collaboration. | CC BY-ND | no | 2024-01-16 23:49:21 | medRxiv. 2023 Dec 18;:2023.12.15.23300050 | oa_package/fc/85/PMC10775324.tar.gz |
|
PMC10775342 | 38196593 | Summary
The R644C variant of lamin A is controversial, as it has been linked to multiple phenotypes in familial studies, but has also been identified in apparently healthy volunteers. Here we present data from a large midwestern US cohort showing that this variant associates genetically with hepatic steatosis, and with related traits in additional publicly available datasets, while in vitro testing demonstrated that this variant increased cellular lipid droplet accumulation. Taken together, these data support this LMNA variant’s potential pathogenicity in lipodystrophy and metabolic liver disease. | Metabolic dysfunction-associated steatotic liver disease (MASLD) and, when associated with lipotoxicity and inflammation, metabolic dysfunction-associated steatohepatitis (MASH) together represent a genetically and phenotypically diverse entity that is now the most common liver disease in the United States and for which there is no approved medical therapy 1 , 2 . Therefore, it is imperative to define the full spectrum of its pathogenesis to facilitate both the development of novel therapies and their delivery to those who are most likely to benefit. A novel and relatively under-explored area in MASLD/MASH is the role of the nuclear envelope and lamina; variants in LMNA , encoding A-type nuclear lamins, are implicated in diverse diseases including progeria, muscular dystrophy, and lipodystrophy syndromes that include insulin resistance and early-onset MASH with progression to cirrhosis 3 , 4 . Therefore, a fuller understanding of nuclear lamina-related liver disease may provide valuable insights into MASH generally; however, the mechanisms of LMNA -related liver disease are largely obscure, and some LMNA variants exhibit significant variability in phenotype and penetrance within and between families. 
The LMNA R644C variant, which alters proteolytic processing of lamin A 5 , has been controversial and appears to be particularly variable, with reported linkages to several distinct phenotypes but also identification in healthy volunteers 6 , 7 . Therefore, we sought to clarify the potential pathogenicity of this variant in MASLD by determining its genetic association with hepatic steatosis in a large single-center US cohort and its functional effects on lipid droplet accumulation in vitro .
We tested rs142000963 (g.156138719 C>T; LMNA p.R644C) for its effect on hepatic steatosis in the Michigan Genomics Initiative (MGI) cohort (>57,000 individuals) 8 , 9 . Natural language processing of pathology and radiology reports was used to identify cases with hepatic steatosis (n=5,856) on liver biopsy and/or imaging; participants not classified as cases were considered controls (n=51,166). All MGI participants had been genotyped via the Illumina HumanCoreExome array; overall rs142000963 minor allele frequency (rs142000963-T) was 0.002. Association analysis was performed in SAIGE v0.29 with steatosis as the outcome, controlling for age, age², sex, and the first 10 principal components in an additive genetic model. We found that LMNA R644C positively associated with hepatic steatosis, with an odds ratio of 1.7 ( P =0.02; Table 1 ). The strength and significance of the association did not vary between the all-ancestry MGI cohort (N=57,022) and the European ancestry-only cohort (n=51,550). To determine whether rs142000963-T might predispose to more advanced liver disease in addition to hepatic steatosis, a phenome-wide association study (PheWAS) was performed in the MGI dataset using rs142000963-T as the variant of interest. We found that rs142000963-T significantly associated with hepatic decompensation – development of ascites ( P =0.002, odds ratio = 5.0; Supplementary Table 1 ) – which remained significant after Benjamini-Hochberg (with false-discovery rate of 0.05) or Bonferroni correction for simultaneous testing (all liver-related phenotypes listed in Supplementary Table 2 ). Weaker associations, which were not significant after Benjamini-Hochberg correction, were seen with undergoing liver transplant ( P =0.03, odds ratio = 10.0), and acute or subacute hepatic necrosis ( P <0.05, odds ratio = 16.4).
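The Benjamini-Hochberg multiplicity control used above can be made concrete with a minimal sketch of the step-up procedure; the p-values here are illustrative, not those from the PheWAS:

```python
def benjamini_hochberg(pvals, fdr=0.05):
    """Return booleans marking which hypotheses are rejected under
    Benjamini-Hochberg control of the false-discovery rate: find the
    largest rank k (1-based, p-values ascending) with p_(k) <= k/m * fdr,
    then reject all hypotheses ranked at or below k."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank / m * fdr:
            k_max = rank
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            rejected[idx] = True
    return rejected

# Step-up thresholds at FDR 0.05 for m=4 are 0.0125, 0.025, 0.0375, 0.05:
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))  # [True, True, True, False]
```

The step-up nature matters: a p-value above its own threshold can still be rejected if a larger-ranked p-value passes, e.g. `benjamini_hochberg([0.04, 0.049])` rejects both hypotheses.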
Consistent with its proposed role as an atypical or incompletely penetrant lipodystrophy allele 7 , PheWAS of publicly available data from larger datasets via the Type 2 Diabetes Knowledge Portal 10 (T2DKP, https://t2d.hugeamp.org/ ) revealed strong associations between rs142000963-T and extrahepatic MASLD/MASH-related anthropometric, glycemic, and lipid-related traits including waist-to-hip ratio ( P =0.001), type 2 diabetes ( P =0.004), higher hemoglobin A1c ( P =0.001), and decreased HDL ( P =0.004); Supplementary Table 3 . These associations remained significant after Benjamini-Hochberg correction for 45 such phenotypes with false-discovery rate of 0.05; tested phenotypes are listed in Supplementary Table 4 .
The R644C variant of lamin A has been shown in vitro to alter its proteolytic processing by the zinc-dependent protease ZMPSTE24, but the functional impact of this altered processing has been unclear, as this variant has been linked to disparate laminopathy phenotypes 5 – 7 . Given the results of our GWAS and PheWAS analyses, we sought to address whether the steatosis-promoting effects of rs142000963-T could be hepatocyte-autonomous. To address this, mCherry-tagged wild-type (WT) or R644C lamin A was expressed in Huh7 human hepatoma cells, and lipid accumulation was determined by fluorescence microscopy with a lipid-binding fluorophore. Relative to WT lamin A, cells expressing lamin A R644C demonstrated significantly increased lipid droplet accumulation, without ( Figure 1A ) or with ( Figure 1B ) oleic acid supplementation (quantitation shown in Figure 1C ). These functional data corroborate our genetic data and support the pathogenicity of rs142000963-T in LMNA -related lipodystrophy and MASLD/MASH; in addition, they suggest the possibility of hepatocyte-autonomous lipid accumulation in vivo .
In summary, rs142000963-T ( LMNA R644C) significantly associated with hepatic steatosis, and to a lesser extent with liver-related events in a large midwestern US cohort, as well as with MASLD-related metabolic traits in large publicly available datasets via T2DKP; moreover, it increased lipid droplet accumulation in Huh7 cells. These data provide genetic support, and direct functional evidence, for the pathogenicity of rs142000963-T / LMNA R644C in metabolic laminopathies and suggest that its incomplete penetrance may be due, at least in part, to genetic modifiers that have not yet been defined.
Supplementary Material | Grant Support:
This work was supported by K08DK120948 (G.F.B.) and the University of Michigan Department of Internal Medicine. X.D., Y.C., and E.K.S. are supported in part by R01DK106621 (E.K.S.), R01DK107904 (E.K.S.), R01DK128871 (E.K.S.), R01DK131787 (E.K.S.), the University of Michigan Department of Internal Medicine, and a University of Michigan MBioFAR award. | CC BY | no | 2024-01-16 23:49:21 | medRxiv. 2023 Dec 22;:2023.12.20.23300290 | oa_package/56/2f/PMC10775342.tar.gz |
|||||
PMC10775343 | 38196747 | Introduction
The overarching goal of drug discovery is to generate chemicals with specific functionality through the design of chemical structure ( Li & Kang, 2020 ). Functionality, often in the context of drug discovery, refers to the specific effects a chemical exhibits on biological systems (e.g., vasodilator, analgesic, protease inhibitor), but it is applicable to materials as well (e.g., electroluminescent, polymer). Computational methods often approach molecular discovery through structural and empirical methods such as protein-ligand docking, receptor binding affinity prediction, and pharmacophore design ( Corso et al., 2022 ; Trott & Olson, 2010 ; Wu et al., 2018 ; Yang, 2010 ). These methods are powerful for designing molecules that bind to specific protein targets, but at present they are unable to explicitly design for specific organism-wide effects. This is largely because biological complexity increases with scale, and many whole-body effects are only weakly associated with specific protein inhibition or biomolecular treatment ( Drachman, 2014 ).
Humans have long been documenting chemicals and their effects, and it is reasonable to assume functional relationships are embedded in language itself. Text-based functional analysis has been paramount for our understanding of the genome through Gene Ontology terms ( Consortium, 2004 ). Despite its potential, text-based functional analysis for chemicals has been largely underexplored. This is in part due to the lack of high-quality chemical function datasets but is more fundamentally due to the high multi-functionality of molecules, which is less problematic for genes and proteins. High-quality chemical function datasets have been challenging to generate due to the sparsity and irregularity of functional information in chemical descriptions, patents, and literature. Recent efforts at creating such datasets tend to involve consolidation of existing curated descriptive datasets ( Wishart et al., 2023 ; Degtyarenko et al., 2007 ). Similarly, keyword-based function extraction partially solves the function extraction problem by confining its scope to singular predetermined functionality, but it fails at broadly extracting all relevant functions for a given molecule ( Subramanian et al., 2023 ). Given their profound success in text summarization, Large Language Models (LLMs) may be ideal candidates to broadly extract functional information of molecules from patents and literature, a task that remains unsolved ( Brown et al., 2020 ; OpenAI, 2023 ; Touvron et al., 2023 ). This is especially promising for making use of the chemical patent literature, an abundant and highly specific source of implicit chemical knowledge that has been largely inaccessible due to excessive legal terminology ( Senger, 2017 ; Ashenden et al., 2017 ). This may allow for the creation of a large-scale dataset that effectively captures the text-based chemical function landscape.
We hypothesize that a sufficiently large chemical function dataset would contain a text-based chemical function landscape congruent with chemical structure space, effectively approximating the actual chemical function landscape. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners ( Martin et al., 2002 ). This hypothesis is further based on the observation that function is reported frequently enough in patents and scientific articles for most functional relationships to be contained in the corpus of chemical literature ( Papadatos et al., 2016 ). To evaluate this hypothesis, we set out to create a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. The CheF dataset was found to be of high quality, demonstrating the effectiveness of LLMs for extracting functional information from chemical patents despite not being explicitly trained to do so. Using this dataset, we carry out a series of experiments supporting the notion that the CheF dataset contains a text-based functional landscape that approximates the actual chemical function landscape due to its congruence with chemical structure space. We then demonstrate that this text-based functional landscape can be harnessed to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules.
Database creation.
The SureChEMBL database was shuffled and converted to chiral RDKit-canonicalized SMILES strings, removing malformed strings ( Weininger, 1988 ; Papadatos et al., 2016 ; Landrum et al., 2013 ). SMILES strings were converted to InChI keys and used to obtain PubChem CIDs ( Kim et al., 2023 ). To minimize costs and prevent label dilution, only molecules with fewer than 10 patents were included. This reduced the dataset from 32M to 28.2M molecules, a 12% decrease. A random 100K molecules were selected as the dataset. For each associated patent, the title, abstract, and description were scraped from Google Scholar and cleaned.
The patent title, abstract, and first 3500 characters of the description were summarized into brief functional labels using ChatGPT (gpt-3.5-turbo) from July 15th, 2023, chosen for low cost and high speed. Cost per molecule was $0.005 using gpt-3.5-turbo. Responses from ChatGPT were converted into sets of labels and linked to their associated molecules. Summarizations were cleaned, split into individual words, converted to lowercase, and converted to singular if plural. The cleaned dataset resulted in 29,854 unique labels for 99,454 molecules. Fetching patent information and summarizing with ChatGPT, this method’s bottleneck, took 6 seconds per molecule with 16 CPUs in parallel. This could be sped up to 3.9 seconds by summarizing per-patent rather than per-molecule to avoid redundant summarizations, and even further to 2.6 seconds by using only US and WO patents.
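The cleanup step described above (lowercasing, splitting summaries into individual words, singularizing plurals) can be sketched as follows; the trailing-'s' plural rule here is a deliberate naive simplification for illustration, not the exact implementation used for CheF:

```python
def clean_labels(raw_labels):
    """Lowercase, split multi-word summaries into individual words, and
    naively singularize: strip a trailing 's' unless the word ends in
    'ss' (so 'inhibitors' -> 'inhibitor' but 'glass' is untouched)."""
    cleaned = set()
    for label in raw_labels:
        for word in label.lower().split():
            word = word.strip(",.;")
            if word.endswith("s") and not word.endswith("ss"):
                word = word[:-1]
            if word:
                cleaned.add(word)
    return cleaned

print(sorted(clean_labels(["Kinase Inhibitors", "antivirals"])))
# ['antiviral', 'inhibitor', 'kinase']
```

Deduplicating through a set is what collapses the raw per-patent summaries into a smaller shared vocabulary before the embedding-based consolidation.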
To consolidate labels by semantic meaning, the vocabulary was embedded with OpenAI’s text-embedding-ada-002 and clustered to group labels by embedding similarity. DBSCAN clustering was performed on the embeddings with a sweeping epsilon ( Ester et al., 1996 ). The authors chose the epsilon for optimal clustering, set to be at the minimum number of clusters without quality degradation (e.g., avoiding the merging of antiviral, antibacterial, and antifungal). The optimal epsilon was 0.34 for the dataset herein, consolidating down from 29,854 to 20,030 labels. Representative labels for each cluster were created using gpt-3.5-turbo. The labels from a very large cluster of only IUPAC structural terms were removed to reduce non-generalizable labels. Labels appearing in <50 molecules were dropped to ensure sufficient predictive power. This resulted in a 99,454-molecule dataset with 1,543 unique functional labels, deemed the Chemical Function (CheF) dataset.
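As a rough stand-in for the embedding-based consolidation, the sketch below groups vectors whose pairwise cosine distance falls under a threshold, taking the transitive closure (akin to DBSCAN with min_samples=1); the 2-D vectors and epsilon are toy illustrations, not actual ada-002 embeddings:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity; assumes non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def cluster_by_distance(vectors, eps):
    """Union-find grouping of vectors whose pairwise cosine distance is
    <= eps, taking the transitive closure of the neighbor relation."""
    n = len(vectors)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_distance(vectors[i], vectors[j]) <= eps:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())

# The first two vectors point in nearly the same direction and merge;
# the third is orthogonal and remains its own cluster.
print(cluster_by_distance([(1.0, 0.0), (0.98, 0.2), (0.0, 1.0)], eps=0.1))
# [[0, 1], [2]]
```

Sweeping `eps` here plays the same role as the epsilon sweep described above: too small leaves near-synonyms split apart, too large merges semantically distinct labels.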
Text-based functional landscape graph.
Per-molecule label co-occurrence was counted across CheF. Counts were used as edge weights between label nodes to create a graph, visualized in Gephi using force atlas, nooverlap, and label adjust methods (default parameters) ( Bastian et al., 2009 ). Modularity-based community detection with 0.5 resolution resulted in 19 communities.
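The edge-weight construction can be sketched as counting per-molecule label pairs; the label sets below are toy examples, not CheF data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(molecule_labels):
    """Count, across molecules, how often each pair of functional labels
    appears together; pairs are taken from the sorted label set so
    (a, b) and (b, a) accumulate into the same edge weight."""
    edges = Counter()
    for labels in molecule_labels:
        for pair in combinations(sorted(set(labels)), 2):
            edges[pair] += 1
    return edges

# Toy per-molecule label sets
mols = [{"antiviral", "hcv"}, {"antiviral", "hcv", "inhibitor"}, {"hcv", "inhibitor"}]
print(cooccurrence_edges(mols))
```

Each resulting pair becomes an edge between two label nodes with the count as its weight, which is the graph handed to the force-directed layout.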
Coincidence of labels and their neighbors in structure space.
The 100K molecular fingerprints were t-SNE projected using scikit-learn, setting the perplexity parameter to 500. Molecules were colored if they contained a given label, see chefdb.app. The max fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of the 10 most commonly co-occurring labels was computed. The null co-occurrence was calculated by computing the max similarity from each molecule containing a given label to a random equal-sized set. Significance for each label was computed with an independent 2-sided t-test. The computed P values were then subjected to a false-discovery-rate (FDR) correction and the labels with P < 0.05 after FDR correction were considered significantly clustered ( Benjamini & Hochberg, 1995 ). Limiting max co-occurring label abundance to 1K molecules was necessary to avoid polluting the analysis, as hyper-abundant labels would force the Tanimoto similarity to 1.0.
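The per-molecule statistic reduces to set-based Tanimoto (Jaccard) similarity over fingerprint bits; a minimal sketch with toy fingerprints represented as sets of on-bit indices (not real ECFPs):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two fingerprints, each
    represented as the set of 'on' bit indices."""
    inter = len(fp_a & fp_b)
    union = len(fp_a | fp_b)
    return inter / union if union else 0.0

def max_similarity_to_set(fp, fp_set):
    """Max Tanimoto similarity from one molecule to a reference group:
    the per-molecule statistic compared against the random-set null."""
    return max(tanimoto(fp, other) for other in fp_set)

# Toy fingerprints as sets of on-bit indices
a = {1, 2, 3}
group = [{2, 3, 4}, {7, 8}]
print(tanimoto(a, {2, 3, 4}))           # 0.5
print(max_similarity_to_set(a, group))  # 0.5
```

Comparing the distribution of this max-similarity statistic between co-occurring-label molecules and a random equal-sized set is what the per-label t-test operates on.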
Model training.
Several multi-label classification models were trained to predict the CheF from molecular representations. These models included logistic regression (C=0.001, max iter=1000), random forest classifier (n estimators=100, max depth=10), and a feedforward neural network (BCEWithLogitsLoss, layer sizes (512, 256), 5 epochs, 0.2 dropout, batch size 32, learning rate 0.001; 5-fold CV to determine params). A random 10% test set was held out from all model training. Macro average and individual label ROC-AUC and PR-AUC were calculated. | Results
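The per-label ROC-AUC reported here can be computed directly from the rank (Mann-Whitney) statistic without any library dependency; a minimal sketch with toy scores, not actual model outputs:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U formulation: the fraction of
    (positive, negative) pairs the classifier ranks correctly,
    counting score ties as half a correct ranking."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy single-label example: 3 of 4 positive/negative pairs ranked correctly
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In the multi-label setting this is applied once per functional label, and the macro average is the unweighted mean of the per-label values.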
Patents are an abundant source of highly specific chemical knowledge. It is plausible that a large dataset of patent-derived molecular function would capture most known functional relationships and could approximate the chemical function landscape. High-fidelity approximation of the chemical function landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners. This would allow for the prediction of functional labels for chemicals which is, to our knowledge, a novel task.
Chemical function dataset creation.
We set out to create a large-scale database of chemicals and their patent-derived molecular functionality. To do so, a random 100K molecules and their associated patents were chosen from the SureChEMBL database to create a Chemical Function (CheF) dataset ( Fig. S1 ) ( Papadatos et al., 2016 ). To ensure that patents were highly relevant to their respective molecule, only molecules with fewer than 10 patents were included in the random selection, reducing the number of available molecules by 12%. This was done to exclude over-patented molecules like penicillin with over 40,000 patents, most of which are irrelevant to its functionality.
For each molecule-associated patent in the CheF dataset, the patent title, abstract, and description were scraped from Google Scholar and cleaned. ChatGPT (gpt-3.5-turbo) was used to generate 1–3 functional labels describing the patented molecule given its unstructured patent data ( Fig. 1a ). The LLM-assisted function extraction method’s success was validated manually across 1,738 labels generated from a random 200 CheF molecules. Of these labels, 99.6% had correct syntax and 99.8% were relevant to their respective patent ( Table S1 ). 77.9% of the labels directly described the labeled molecule’s function. However, this increased to 98.2% when considering the function of the primary patented molecule, of which the labeled molecule is an intermediate ( Table S1 ).
The LLM-assisted method resulted in 104,607 functional labels for the 100K molecules. These were too many labels to yield any predictive power, so measures were taken to consolidate these labels into a concise vocabulary. The labels were cleaned, reducing the number of labels to 39,854, and further consolidated by embedding each label with a language model (OpenAI’s text-embedding-ada-002) to group grammatically dissimilar yet semantically similar labels together. The embeddings were clustered with DBSCAN using a cutoff that minimized the number of clusters without cluster quality deterioration (e.g., avoiding the grouping of antiviral, antibacterial, and antifungal) ( Fig. S4 ). Each cluster was summarized with ChatGPT to obtain a single representative cluster label.
The embedding-based clustering and summarization process was validated across the 500 largest clusters. Of these, 99.2% contained semantically common elements and 97.6% of the cluster summarizations were accurate and representative of their constituent labels ( Table S2 ). These labels were mapped back to the CheF dataset, resulting in 19,616 labels ( Fig. 1b ). To ensure adequate predictive power, labels appearing in less than 50 molecules were dropped. The final CheF dataset consisted of 99,454 molecules and their 1,543 descriptive functional labels ( Fig. 1 , Table S3 ).
Functional labels map to natural clusters in chemical structure space.
Molecular function nominally arises directly from structure, and thus any successful dataset of functional labels should cluster in structural space. This hypothesis was based in part on the observation that chemical function is often retained despite minor structural modifications ( Maggiora et al., 2014 ; Patterson et al., 1996 ). And due to molecules and their derivatives frequently being patented together, structurally similar molecules should be annotated with similar patent-derived functions. This rationale generally holds, but exceptions include stereoisomers with different functions (e.g. as for thalidomide) and distinct structures sharing the same function (e.g. as for beta-lactam antibiotics and tetracyclines).
To evaluate this hypothesis, we embedded the CheF dataset in structure space by converting the molecules to molecular fingerprints (binary vectors representing a molecule’s substructures) and visualized the result with t-distributed Stochastic Neighbor Embedding (t-SNE) ( Fig. 2 ). Then, to determine if the CheF functional labels clustered in this structural space, the maximum fingerprint Tanimoto similarity was computed between the fingerprint vectors of each molecule containing a given label; this approach provides a measure of structural similarity between molecules that have the same functional label ( Fig. 2 ) ( Bajusz et al., 2015 ). This value was compared to the maximum similarity computed from a random equal-sized set of molecules to determine significance. Remarkably, 1,192 of the 1,543 labels were found to cluster significantly in structural space (independent t-tests per label, false-discovery rate of 5%). To illustrate this clustering, it was visualized for the labels ‘hcv’ (hepatitis C virus), ‘electroluminescence’, ‘serotonin’, and ‘5-ht’ (5-hydroxytryptamine, the chemical name for serotonin) ( Fig. 2 ). For the label ‘electroluminescence’ there was one large cluster containing almost exclusively highly conjugated molecules ( Fig. 2c ). For ‘hcv’, there were multiple distinct communities representing antivirals targeting different mechanisms of HCV replication. Clusters were observed for NS5A inhibitors, NS3 macrocyclic and peptidomimetic protease inhibitors, and nucleoside NS5B polymerase inhibitors ( Fig. 2a , S5 ). The observed clustering of functional labels in structure space provided evidence that the CheF dataset labels had accurately captured structure-function relationships, validating our initial hypothesis.
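The per-label clustering test can be sketched as follows. The fingerprints below are random toy bit-vectors (a shared "core" substructure plus noise), not real molecular fingerprints, and the bit length and densities are arbitrary choices for illustration.

```python
# Sketch of the per-label structural-clustering test: for each molecule
# carrying a label, take its maximum Tanimoto similarity to the other
# labeled molecules, then compare that distribution against an equal-sized
# random set with an independent t-test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def tanimoto(a, b):
    """Tanimoto similarity between two binary vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def max_sims(fps):
    """Max Tanimoto from each fingerprint to any *other* one in the set."""
    return [max(tanimoto(fps[i], fps[j])
                for j in range(len(fps)) if j != i)
            for i in range(len(fps))]

n_bits = 256
# A "label" whose molecules share a substructure: common core bits + noise.
core = rng.random(n_bits) < 0.3
labeled = [core | (rng.random(n_bits) < 0.05) for _ in range(20)]
# A random equal-sized set of molecules.
random_set = [rng.random(n_bits) < 0.3 for _ in range(20)]

t, p = ttest_ind(max_sims(labeled), max_sims(random_set))
print(f"t = {t:.1f}, p = {p:.2e}")  # the labeled set clusters significantly
```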
Label co-occurrences reveal the text-based chemical function landscape.
Patents contain joint contextual information on the application, structure, and mechanism of a given compound. We attempted to determine the extent to which the CheF dataset implicitly captured this joint semantic context by assessing the graph of co-occurring functional labels ( Fig. 3 ). Each node in the graph represents a CheF functional label, and their relative positioning indicates the frequency of co-occurrence between labels, with labels that co-occur more frequently placed closer together. To prevent the visual overrepresentation of extremely common labels (e.g., inhibitor, cancer, kinase), each node’s size was scaled based on its connectivity instead of its frequency of co-occurrence.
Modularity-based community detection isolates tightly interconnected groups within a graph, distinguishing them from the rest of the graph. This method was applied to the label co-occurrence graph, with the resulting clusters summarized with GPT-4 into representative labels for unbiased semantic categorization ( Table S4 , S5 , S6 ). The authors curated the summarized labels for validity and found them representative of the constituent labels; these were then further consolidated for succinct representation of the semantic categorization ( Table S4 ). This revealed a semantic structure in the co-occurrence graph, where distinct communities such as ‘Electronic, Photochemical, & Stability’ and ‘Antiviral & Cancer’ could be observed ( Fig. 3 , Tables S4 , S5 , S6 ). Within communities, the fine-grained semantic structure also appeared to be coherent. For example, in the local neighborhood around ‘hcv’ the labels ‘antiviral’, ‘ns’ (nonstructural), ‘hbv’ (hepatitis B virus), ‘hepatitis’, ‘replication’, and ‘protease’ were found, all of which are known to be semantically relevant to hepatitis C virus ( Fig. 3 ). The graph of patent-derived molecular functions is a visual representation of the text-based chemical function landscape, and represents a potentially valuable resource for linguistic evaluation of chemical function and ultimately drug discovery.
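The graph construction and community detection can be sketched with toy labels; this assumes the `networkx` library as a stand-in implementation of greedy modularity maximization, and the GPT-4 summarization of each community is not reproduced.

```python
# Sketch of the co-occurrence graph analysis with invented labels. Edges
# are weighted by how often two labels annotate the same molecule, and
# communities are found by greedy modularity maximization.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

molecules = [  # toy label sets, one per molecule
    {"hcv", "antiviral", "protease"},
    {"hcv", "antiviral", "polymerase"},
    {"hbv", "antiviral", "hcv"},
    {"electroluminescence", "oled", "emitter"},
    {"oled", "emitter", "phosphorescence"},
    {"electroluminescence", "phosphorescence", "emitter"},
]

G = nx.Graph()
for labs in molecules:
    for a, b in combinations(sorted(labs), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

communities = greedy_modularity_communities(G, weight="weight")
for i, com in enumerate(communities):
    print(i, sorted(com))
```

On this toy graph the antiviral labels and the electroluminescence labels fall into separate communities, mirroring the 'Antiviral & Cancer' versus 'Electronic, Photochemical, & Stability' split described above.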
Coherence of the text-based chemical function landscape in chemical structure space.
To assess how well text-based functional relationships align with structural relationships, the overlap between the molecules of a given label and those of its 10 most commonly co-occurring labels was calculated ( Fig. 4 ). This was achieved by computing the maximum fingerprint Tanimoto similarity from each molecule containing a given label to each molecule containing any of the 10 most commonly co-occurring labels (with <1,000 total abundance). This value was compared to the maximum similarity computed from each molecule containing a given label to a random equal-sized set of molecules to determine significance. This comparison indicated that molecules containing the 10 most commonly co-occurring labels were closer to the given label’s molecules in structure space than a random set for 1,540 of the 1,543 labels (independent t-tests per label, false-discovery rate of 5%), meaning that text-based functional relationships align with structural relationships ( Fig. 4 ). Together with the semantically structured communities described above, this suggests that users can move from labels to structures to identify new compounds, and from structures to labels to assess a compound’s function.
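The per-label t-tests in these analyses are corrected for multiple testing at a 5% false-discovery rate. A minimal Benjamini-Hochberg sketch (a standard FDR procedure; the paper does not specify its exact implementation):

```python
# Minimal Benjamini-Hochberg FDR control over a vector of p-values.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)   # i/m * alpha
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= i/m * alpha
        rejected[order[: k + 1]] = True
    return rejected

pvals = [0.0001, 0.0004, 0.02, 0.03, 0.8]
print(benjamini_hochberg(pvals))
```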
Functional label-guided drug discovery.
To employ the text-based chemical function landscape for drug discovery, multi-label classification models were trained on CheF to predict functional labels from molecular fingerprints ( Table S7 ). The best performing model was a logistic regression model on molecular fingerprints with positive predictive power for 1,532/1,543 labels and >0.90 ROC-AUC for 458/1,543 labels ( Fig. 5a ).
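The best-performing setup, per-label logistic regression on fingerprint bits evaluated by ROC-AUC, can be sketched as follows. The data here are synthetic stand-ins for CheF fingerprints and labels (each toy label is made a simple function of a few bits so that it is learnable), and the dimensions are far smaller than the real dataset.

```python
# Sketch of the multi-label classifier: one logistic regression per label
# on binary fingerprint inputs, evaluated by per-label ROC-AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_bits, n_labels = 400, 64, 3

X = (rng.random((n, n_bits)) < 0.3).astype(int)
# Each synthetic label depends on a few fingerprint bits (so a linear
# model can recover it); real labels are far noisier.
Y = np.stack([(X[:, 4 * k: 4 * k + 3].sum(axis=1) >= 2).astype(int)
              for k in range(n_labels)], axis=1)

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.25, random_state=0)
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(Xtr, Ytr)

# predict_proba returns one (n_samples, 2) array per label.
probs = np.stack([p[:, 1] for p in clf.predict_proba(Xte)], axis=1)
aucs = [roc_auc_score(Yte[:, k], probs[:, k]) for k in range(n_labels)]
print([f"{a:.2f}" for a in aucs])
```

Ranking the per-label probabilities over a test set (e.g., `np.argsort(-probs[:, k])[:10]`) reproduces the kind of top-10 query used in the serotonin example below.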
This model can thus be used to comprehensively annotate chemical function, even when existing annotations are fragmented or incomplete. As an example, for a known hepatitis C antiviral the model strongly predicted ‘antiviral’, ‘hcv’, ‘ns’ (nonstructural) (94%, 93%, 70% respectively) while predicting ‘protease’ and ‘polymerase’ with low confidence (0.02%, 0.00% respectively) ( Fig. 5b ). The low-confidence ‘protease’ and ‘polymerase’ predictions suggested that the likely target of this drug was the nonstructural NS5A protein, rather than the NS2/3 proteases or NS5B polymerase, a hypothesis that has been validated outside of patents in the scientific literature ( Ascher et al., 2014 ).
The ability to comprehensively predict functional profiles allows for the discovery of new drugs. For example, the label ‘serotonin’ was used to query the test set predictions, and a ranked list of the 10 molecules most highly predicted for ‘serotonin’ was obtained ( Fig. 5c ). All ten of these were patented in relation to serotonin: eight were serotonin receptor ligands (5-HT1, 5-HT2, 5-HT6) and two were serotonin reuptake inhibitors. Similarly, the synonymous label ‘5-ht’ was used as the query and the top 10 molecules were again obtained ( Fig. 5d ). Of these, seven were patented in relation to serotonin (5-HT1, 5-HT2, 5-HT6), four of which were also found in the aforementioned ‘serotonin’ search. The remaining three molecules were patented without reference to the serotonin receptor, but were instead patented for antidepressant, anti-anxiety, and memory dysfunction-relieving effects, all of which have associations with serotonin and its receptor. The identification of known serotonin receptor ligands, together with the overlapping results across synonymous labels, provides an internal validation of the model. Additionally, these search results suggest experiments in which the “mispredicted” molecules may bind to serotonin receptors or otherwise be synergistic with the function of serotonin, thereby demonstrating the practical utility of moving with facility between chemicals and their functions.
To examine the best model’s capability in drug repurposing, functional labels were predicted for 3,242 Stage-4 FDA approved drugs ( Fig. S7 ) ( Ochoa et al., 2021 ). Of the 16 drugs most highly predicted for ‘hcv’, 15 were approved Hepatitis C Virus (HCV) antivirals. Many of the mispredictions in the top 50 were directly relevant to HCV treatment, including eight antivirals and eight polymerase inhibitors. The remaining mispredictions included three ACE inhibitors and two BTK inhibitors, classes that are peripherally associated with HCV through liver fibrosis mitigation and HCV reactivation, respectively ( Corey et al., 2009 ; Mustafayev & Torres, 2022 ). Beyond demonstrating the model’s predictive power, this example suggests that functional label-guided drug discovery may serve as a useful paradigm for rapid antiviral repurposing to mitigate future pandemics.

Discussion
While in silico drug discovery often proceeds through structural and empirical methods such as protein-ligand docking, receptor binding affinity prediction, and pharmacophore design, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. To do so, we developed an LLM- and embedding-based method to create a Chemical Function (CheF) dataset of 100K molecules and their 631K patent-derived functional label annotations. Over 77% of the functional labels (1,192 of 1,543) corresponded to distinct clusters in chemical structure space, indicating congruence between chemical structures and individual text-derived functional labels. Moreover, there was a semantically coherent text-based chemical function landscape intrinsic to the dataset that was found to correspond with broad fields of functionality. Finally, it was found that the relationships in the text-based chemical function landscape mapped with high fidelity to chemical structure space (99.8% of labels), indicating approximation to the actual chemical function landscape.
To leverage the chemical function landscape for drug discovery, several models were trained and benchmarked on the CheF dataset to predict functional labels from molecular fingerprints ( Table S7 ). The top-performing model was utilized for practical applications such as unveiling an undisclosed drug mechanism, identifying novel drug candidates, and mining FDA-approved drugs for repurposing and combination therapy uses. Since the CheF dataset is scalable to the entire 32M+ molecule database, we anticipate that many of these predictions will continue to improve as the dataset grows.
The CheF dataset inherently exhibits a bias towards patented molecules. This implies sparse representation of chemicals with high utility but low patentability, and allows false functional relationships to arise from prophetic claims. Additionally, by restricting the dataset to chemicals with <10 patents, it neglects important well-studied molecules like penicillin. The inclusion of over-patented chemicals could be accomplished by using only the k most abundant terms for a given molecule, using a fine-tuned LLM to summarize only the patents relevant to molecular function (ignoring irrelevant patents on applications like medical devices), or employing other data sources like PubChem or PubMed to fill in these gaps. Increasing label quality and ignoring extraneous claims might be achieved through an LLM fine-tuned on high-quality examples. Further quality increases may result from integration of well-documented chemical-gene and chemical-disease relationships into CheF.
The analysis herein suggests that a sufficiently large chemical function dataset contains a text-based function landscape that approximates the actual chemical function landscape. Further, we demonstrate one of the first examples of functional label-guided drug discovery, made possible by state-of-the-art advances in machine learning. Models in this paradigm have the potential to automatically annotate chemical function, examine non-obvious features of drugs such as side effects, and down-select candidates for high-throughput screening. Moving between textual and physical spaces represents a promising paradigm for drug discovery in the age of machine learning.

Abstract

The fundamental goal of small molecule discovery is to generate chemicals with target functionality. While this often proceeds through structure-based methods, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. We hypothesize that a sufficiently large text-derived chemical function dataset would mirror the actual landscape of chemical functionality. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule’s structure and its interacting partners. To evaluate this hypothesis, we built a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. We carry out a series of analyses demonstrating that the CheF dataset contains a semantically coherent textual representation of the functional landscape congruent with chemical structural relationships, thus approximating the actual chemical function landscape.
We then demonstrate that this text-based functional landscape can be leveraged to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules.

Related Work
Labeled chemical datasets.
Chemicals are complex interacting entities, and many labels can be associated with a given chemical. One common class of label is specific protein-binding activity, which is frequently used to train chemical representation models ( Mysinger et al., 2012 ; Wu et al., 2018 ). Datasets linking chemicals to their functionality have emerged in recent years ( Edwards et al., 2021 ; Huang et al., 2023 ; Degtyarenko et al., 2007 ; Wishart et al., 2023 ). These datasets were largely compiled from existing databases of well-studied chemicals, limiting their generalizability ( Li et al., 2016 ; Fu et al., 2015 ). The CheF dataset developed here aims to improve upon these existing datasets by automatically sourcing molecular function from patents to create a high-quality molecular function dataset, ultimately capable of scaling to the entire SureChEMBL database of 32M+ patent-associated molecules ( Papadatos et al., 2016 ). To our knowledge, the full scale-up would create not just the largest chemical function dataset, but the largest labeled chemical dataset of any kind. Its high coverage of chemical space means that the CheF dataset, in its current and future iterations, may serve as a benchmark for the global evaluation of chemical representation models.
Patent-based molecular data mining and prediction.
Building chemical datasets often involves extracting chemical identities, reaction schemes, quantitative drug properties, and chemical-disease relationships ( Senger et al., 2015 ; Papadatos et al., 2016 ; He et al., 2021 ; Sun et al., 2021 ; Magariños et al., 2023 ; Zhai et al., 2021 ; Li et al., 2016 ). We recently used an LLM to extract patent-derived information to help evaluate functional relevance of results from a machine learning-based chemical similarity search ( Kosonocky et al., 2023 ). We expand upon previous works through the large-scale LLM-based extraction of broad chemical functionality from a corpus of patent literature. This is a task that LLMs were not explicitly trained to do, and we provide validation results for this approach.
Recent work also focused on molecular generation from chemical subspaces derived from patents containing specific functional keywords, for example, all molecules relating to tyrosine kinase inhibitor activity ( Subramanian et al., 2023 ). This allows for a model that can generate potential tyrosine kinase inhibitors but would need to be retrained to predict molecules of a different functional label. In our work, we focus on label classification rather than molecular generation. Further, we integrate multiple functional labels for any given molecule, allowing us to broadly infer molecular functionality given structure. Generative models could be trained on the described dataset, allowing for label-guided molecular generation without re-training for each label.
Chemical-to-textual translation.
Recent work investigated the translation of molecules to descriptive definitions and vice versa ( Edwards et al., 2021 ; 2022 ; Su et al., 2022 ). The translation between language and chemical representations is promising as it utilizes chemical relationships implicit in text descriptions. However, decoder-based molecule-text translation models seem unlikely to be adopted for novel drug discovery tasks, as experimentalists desire strongly deterministic results, reported prediction confidences, and alternative prediction hypotheses. To satisfy these constraints, we opted for a discriminative structure-to-function model.
Many existing chemical-to-text translation models have been trained on datasets containing structural nomenclature and irrelevant words mixed with desirable functional information ( Edwards et al., 2021 ; Degtyarenko et al., 2007 ). Inclusion of structural nomenclature inflates prediction metrics for functional annotation or molecular generation tasks, as structure-to-name and name-to-structure translation are simpler than structure-to-function and function-to-structure. The irrelevant words may cause artifacts during the decoding process depending on the prompt, skewing results in ways irrelevant to the task. In our work, we ensured our model utilized only chemical structure, and not structural nomenclature, when predicting molecular function to avoid data leakage.
Supplementary Material

Acknowledgments
The authors acknowledge the Biomedical Research Computing Facility at The University of Texas at Austin for providing high-performance computing resources. We would also like to thank AMD for the donation of critical hardware and support resources from its HPC Fund. This work was supported by the Welch Foundation (F-1654 to A.D.E., F-1515 to E.M.M.), the Blumberg Centennial Professorship in Molecular Evolution, the Reeder Centennial Fellowship in Systematic and Evolutionary Biology at The University of Texas at Austin, and the NIH (R35 GM122480 to E.M.M.). The authors would like to thank Aaron L. Feller and Charlie D. Johnson for useful criticism and discussion during the development of this project.

Preprint: arXiv:2309.08765v2 (ArXiv, 2023 Dec 18). License: CC BY.
PMC10775347 (PMID: 38196746)

Introduction
Myelination is increasingly recognized as an important dynamic biomarker of brain development, aging, and various neurological conditions, including but not limited to multiple sclerosis, leukodystrophies, and neurodegenerative disorders ( 1 – 3 ). These conditions can lead to alterations in the biophysical properties observed in MRI scans ( 4 – 6 ). To quantitatively assess the myelin content of the brain, various quantitative MRI techniques ( 2 , 5 , 7 – 10 ) have been proposed based on different tissue properties, such as longitudinal relaxation time (T1) ( 6 , 11 ), transverse relaxation time (T2) ( 12 , 13 ), T2* ( 14 , 15 ), the T1-weighted/T2-weighted ratio ( 16 ), diffusion ( 17 ), and magnetization transfer ( 18 – 20 ). For instance, T1 or R1 (the reciprocal of T1) has been used to predict the myelination process in the cortex ( 4 , 6 , 21 ), as a decrease in T1 correlates with more myelination. Compared to T1, which can be impacted by contributions from both myelin content and other confounds, myelin water fraction (MWF) ( 2 , 14 , 22 ), which probes the short-T1 and short-T2 signal contributions of water molecules trapped within the myelin sheaths, has been shown to be more specific for predicting myelination changes and characterizing the myelin content of the brain ( 2 , 23 ).
To estimate MWF, conventional MWF mapping relies on a multi-echo spin-echo or gradient-echo sequence ( 22 ) and multi-compartment fitting of the exponential decay signal to extract the shorter relaxation time of myelin water ( 14 , 22 , 24 ). However, the acquisition time of the conventional method is long (e.g., 1 minute per 1 mm slice ( 22 )), and the fitting process is ill-conditioned and susceptible to noise. To improve MWF mapping, the Visualization of Short Transverse relaxation time component (ViSTa) technique ( 25 ) was proposed for direct visualization of the myelin water signal. This technique employs a specifically configured double inversion-recovery sequence that suppresses the long-T1 components while preserving the signals from the short-T1 components of myelin water. This allows for direct and precise imaging of myelin water, enabling accurate assessment of myelin content without fitting. However, ViSTa faces challenges such as decreased SNR due to signal suppression and a long acquisition time (40 seconds per slice at 1 mm² resolution, even with 9x parallel imaging acceleration achieved via advanced wave-CAIPI techniques ( 26 )).
Magnetic resonance fingerprinting (MRF) ( 27 ) is a rapid quantitative imaging technique that simultaneously estimates multiple tissue parameters and has garnered significant interest as a diagnostic tool in various diseases ( 28 – 31 ). This technique was initially proposed using a 2D acquisition. Since then, numerous studies have focused on advancing MRF to achieve shorter scan times, higher resolutions, improved accuracy, and reduced variability. To enable fast high-resolution MRF for whole-brain quantitative imaging, 3D stack-of-spirals ( 32 , 33 ) and spiral-projection-imaging trajectories ( 34 ) have been developed. These advancements allow for whole-brain 3D MRF at 1 mm isotropic resolution in ~6 minutes. On the reconstruction side, various methods such as parallel imaging ( 32 , 35 ), low-rank/subspace models ( 36 – 39 ), and deep learning methods ( 40 – 42 ) have been incorporated into MRF reconstruction to enhance image quality. Recently, MWF mapping has been conducted using modified MRF sequences that aim to achieve better signal separability between myelin water and other tissue types ( 5 , 43 ). However, the extraction of MWF still relies on multi-compartment fitting, which can pose challenges in accurately separating different components, particularly in highly undersampled MRF data with low signal-to-noise ratio (SNR). Multi-compartment fitting is ill-posed and typically requires additional assumptions and/or priors to obtain good results, which can create bias or artifacts. This is an open area of research, where a number of innovative reconstruction algorithms ( 44 – 47 ) are being developed to tackle this issue.
In this work, we have developed a novel 3D ViSTa-MRF acquisition and reconstruction framework that integrates the ViSTa technique into MRF. This approach accelerates MWF mapping by 30x compared to the gold-standard ViSTa approach (1.3 seconds per slice at 1 mm³ resolution) while also enabling better SNR and simultaneous estimation of T1, T2, and PD. With the double-inversion preparation in 3D ViSTa-MRF, we can directly visualize the MWF image once the time-series data are reconstructed, without the need for multi-compartment modeling. We demonstrate that the proposed method achieves high-fidelity whole-brain MWF/T1/T2/PD maps at 1 mm and 0.66 mm isotropic resolution in 5 minutes and 15.2 minutes, respectively. Furthermore, we propose a 5-minute whole-brain 1 mm-isotropic ViSTa-MRF protocol to quantitatively investigate brain development in early childhood. This work is an extension of our earlier work, which was reported as a conference abstract and oral presentation at the Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM) 2022 ( 48 ).
ViSTa-MRF sequence
Figure 1(A) shows the sequence diagram of the ViSTa-MRF acquisition, where each acquisition group consists of multiple ViSTa-preparation blocks and one MRF block. A water-exciting rectangular (WE-Rect) hard pulse ( 49 ) was employed for signal excitation, with the RF duration set to 2.38 ms at 3T so that the first zero-crossing of its sinc-shaped frequency response falls at the main fat frequency (440 Hz). In each ViSTa block, specifically configured double inversion-recovery pulses were applied (TI1 = 560 ms, TI2 = 220 ms), and the first subsequent signal time point was referred to as the “ViSTa signal”. Twenty consecutive time points were acquired within each ViSTa block to facilitate joint spatial-temporal subspace reconstruction. Through extended-phase-graph (EPG) ( 50 ) simulation, Figure 1(B) shows that the myelin-water signal was preserved in the ViSTa signal, while white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) were suppressed, enabling direct myelin-water imaging. At the end of the ViSTa block, a BIR-4 90° saturation pulse with a spoiler gradient was applied to suppress inflowing CSF and vessel signals. A waiting time (TD) of 380 ms was selected to achieve a steady-state longitudinal magnetization of the short-T1 signal for the next ViSTa preparation. To enhance the encoding of the short-T1 signal, the sequence repeated the ViSTa block eight times, followed by an MRF block, resulting in a total acquisition time of 19 seconds for each acquisition group. Increasing the number of ViSTa blocks yielded more ViSTa signal encodings but extended the acquisition time. To establish the optimal number of ViSTa blocks, we undertook empirical tests using varying numbers of ViSTa blocks and assessed the reconstructed ViSTa image quality to ensure that it matched the standard ViSTa sequence.
Through our experiments, we identified that employing eight ViSTa blocks struck the ideal balance between ViSTa signal quality and acquisition duration. In the last ViSTa block, the saturation pulse and TD time were omitted to ensure a smooth signal transition between the ViSTa block and the MRF block. This step was taken to standardize the signal across all eight ViSTa blocks; the ViSTa signal itself was not required in the last block. After the ViSTa blocks, a 500-time-point FISP-MRF ( 51 ) block was acquired. Unlike conventional MRF acquisition, where the inversion-recovery pulse is placed at the beginning of the MRF block, in this approach we introduced a 1-second rest time before applying the inversion-recovery pulse at the 200th time point of the MRF block. This design allows for the recovery of longitudinal magnetization after the ViSTa preparations. Between the acquisition groups, a BIR-4 90° saturation pulse with a TD of 380 ms was used to suppress inflowing CSF and vessel signals and to achieve steady-state longitudinal magnetization of the short-T1 signal.
To improve the SNR and the estimation accuracy of myelin water (T1/T2 = 120/20 ms), WM (T1/T2 = 750/60 ms), and GM (T1/T2 = 1300/75 ms), we employed the Cramér–Rao lower bound (CRLB) of the T1/T2 values to optimize the flip angle (FA) train in the ViSTa-MRF sequence ( 52 , 53 ). Figure 1(C) shows the ViSTa-MRF signal curves, with good signal separability between the different tissue types. To achieve efficient sampling in 3D k-space, the ViSTa-MRF sequence employed the optimized 3D tiny-golden-angle-shuffling (TGAS) spiral-projection acquisition ( 39 ) with 220×220×220 mm³ whole-brain coverage for incoherent undersampling. Different numbers of acquisition groups were employed for the different resolution cases, with the 3D TGAS trajectories designed to rotate around three axes, as shown in Figure 2(A) .
Synergistic subspace reconstruction
We proposed a memory-efficient fast reconstruction that leverages spatial-temporal subspace reconstruction ( 36 – 38 , 54 ) with optimized k-space preconditioning ( 55 ). The ViSTa-MRF dictionary, accounting for B1+ variations (B1+ range [0.70:0.05:1.20]), was generated using EPG, and the first 14 principal components were chosen as the temporal bases ( Figure 2(A) ). Compared to our previous study ( 39 ), where only 5 bases were selected in the subspace reconstruction for a conventional MRF sequence, 14 bases were used in the present ViSTa-MRF study to better represent the myelin-water signal. This change was due to the sequence design of ViSTa-MRF, which introduced more signal variations through CRLB optimization. To determine the number of bases needed for the reconstruction, we applied two conditions: (i) ensuring that the number of bases is sufficient to represent 99% of the signal in the dictionary, and (ii) conducting empirical tests with different numbers of bases and examining the quality of the ViSTa signal to ensure that it resembles the results obtained from the standard ViSTa sequence. The ViSTa-MRF time-series was projected onto the subspace, resulting in 14 coefficient maps based on the selected temporal bases. The ViSTa-MRF time-series x is expressed as x = Φc, where Φ contains the temporal bases and c contains the coefficient maps. Figure 2(B) illustrates the flowchart of the subspace reconstruction with locally low-rank (LLR) constraints, which can be described as:

ĉ = argmin_c ‖ M F S Φ c − y ‖₂² + λ Σ_b ‖ R_b(c) ‖_*     [1]

where y is the acquired k-space data, S contains the coil sensitivities, F is the NUFFT operator, M is the undersampling pattern, λ is the regularization parameter, and R_b extracts the b-th local spatial block of the coefficient maps, whose nuclear norm ‖·‖_* enforces the locally low-rank constraint. We implemented a novel algorithm in SigPy ( 56 ) to solve Equation [1] that combined polynomial-preconditioned FISTA reconstruction with Pipe-Menon density compensation ( 55 ) and basis balancing ( 57 ) to reduce artifacts and accelerate the subspace reconstruction.
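The subspace construction itself, taking the left singular vectors of a signal dictionary as temporal bases and keeping enough components to capture at least 99% of the dictionary energy, can be sketched with a toy dictionary. The simple saturation-recovery curves below are a stand-in for the EPG-simulated ViSTa-MRF dictionary, so the resulting number of bases is illustrative only (the actual sequence needed 14).

```python
# Sketch of temporal-subspace construction via SVD of a signal dictionary.
import numpy as np

t = np.linspace(0.01, 3.0, 500)                 # 500 time points (s)
T1s = np.linspace(0.1, 3.0, 200)                # toy T1 range (s)
D = np.stack([1 - np.exp(-t / T1) for T1 in T1s], axis=1)  # (time, atoms)

U, s, _ = np.linalg.svd(D, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
K = int(np.searchsorted(energy, 0.99) + 1)      # smallest K with >=99% energy
Phi = U[:, :K]                                   # temporal bases

# The whole dictionary is well approximated in the K-dimensional subspace;
# the relative Frobenius error is sqrt(1 - captured energy) <= 0.1.
rel_err = np.linalg.norm(Phi @ (Phi.T @ D) - D) / np.linalg.norm(D)
print(K, f"{rel_err:.2e}")
```

Reconstruction then solves for the K coefficient maps c rather than the full 500-point time-series, which is what makes the problem memory-efficient.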
The off-line reconstruction package is available at https://github.com/SophieSchau/MRF_demo_ISMRM2022 . With this reconstruction approach, the whole-brain 14-basis coefficient maps (e.g., 220×220×220×14 bases for 1 mm-isotropic MRF data) can be efficiently reconstructed on a GPU with 24 GB VRAM in 45 minutes (~90 s per iteration, 30 iterations with polynomial preconditioning). This is a significant improvement compared to reconstruction performed using, e.g., the popular BART software ( 58 ), which requires over 1 TB of RAM and a reconstruction time of over 4 hours on our Linux server for the same problem. This advancement allows for fast reconstruction of large spatial-temporal data, providing more efficient processing. As shown in Figure 2(B) and (C) , the reconstructed coefficient maps ( c ) were then used to generate the time-series with voxel-by-voxel B1+ correction for estimating the T1/T2/PD maps, while the quantitative MWF map was derived from the reconstructed first-time-point ViSTa image I(ViSTa) and the PD map I(PD):

MWF = I(ViSTa) / ( S(myelin_water) × I(PD) )     [2]

where S(myelin_water) is the B1+-corrected, EPG-simulated signal intensity from the dictionary using nominal T1 and T2 values of myelin water (T1/T2 = 120/20 ms). The S(myelin_water) signal is normalized to ‘1’, where ‘1’ denotes the maximum signal tipped down by a 90-degree excitation pulse from a fully recovered Mz.
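Numerically, the MWF map is just the ViSTa image divided voxel-wise by the simulated myelin-water signal fraction times the PD image, MWF = I(ViSTa) / (S(myelin_water) · I(PD)). A toy sketch (all values are invented; in particular S_myelin_water = 0.38 is a hypothetical stand-in for the EPG-simulated, B1+-corrected signal intensity):

```python
# Toy voxel-wise MWF computation from a ViSTa image and a PD map.
import numpy as np

S_myelin_water = 0.38    # hypothetical simulated ViSTa signal for
                         # T1/T2 = 120/20 ms, normalized so that a 90-degree
                         # pulse on fully recovered Mz gives 1.0
I_vista = np.array([[0.012, 0.020], [0.008, 0.000]])  # ViSTa image (a.u.)
I_pd = np.array([[0.90, 0.95], [0.85, 1.00]])         # PD image (a.u.)

mwf = I_vista / (S_myelin_water * I_pd)
print(np.round(100 * mwf, 1))  # MWF in percent
```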
Using spatiotemporal subspace reconstruction, the entire time-series was jointly reconstructed, allowing us to reconstruct the first time-point image (ViSTa signal) while leveraging the encoding and SNR-averaging from all other time points. The implementation of the ViSTa-MRF acquisition and joint subspace reconstruction enabled us to directly visualize myelin-water images once the time-series data were reconstructed. It is important to highlight that the myelin-water signal evolution throughout the MRF sequence (as shown in Figure 1(C) ) is markedly different from that of the WM and GM signals. This unique behavior of the myelin-water signal, along with the subspace reconstruction, effectively utilized the signal and spatial encoding throughout the MRF sequence to differentiate the myelin-water signal from other tissue types and to create a high-SNR first (ViSTa) time-point image. This capability would not have been achievable with, for example, a sliding-window NUFFT reconstruction ( 59 ).
By utilizing the reconstructed quantitative T 1 , T 2 , and PD maps, we can synthesize multiple contrast-weighted images using the Bloch equations, providing robust contrasts while significantly reducing scan time and improving motion-robustness during examinations.
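As a hedged sketch of this synthesis step, the following uses simplified steady-state spin-echo and inversion-recovery signal equations rather than the full Bloch simulation used in this work; the tissue values and sequence timings are illustrative nominal 3T numbers, not parameters from this study.

```python
import numpy as np

def synth_spin_echo(pd, t1, t2, tr, te):
    """Simplified spin-echo signal synthesized from quantitative maps."""
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

def synth_flair(pd, t1, t2, tr, te, ti):
    """Simplified inversion-recovery (FLAIR-like) signal with CSF nulling."""
    return pd * np.abs(1 - 2 * np.exp(-ti / t1) + np.exp(-tr / t1)) * np.exp(-te / t2)

# illustrative nominal 3T values for [WM, GM, CSF]; T1/T2 in seconds
t1 = np.array([0.85, 1.40, 4.00])
t2 = np.array([0.07, 0.10, 2.00])
pd = np.array([0.70, 0.85, 1.00])

t1w = synth_spin_echo(pd, t1, t2, tr=0.5, te=0.01)       # short TR/TE: T1 weighting
flair = synth_flair(pd, t1, t2, tr=9.0, te=0.1, ti=2.5)  # long TI suppresses CSF
```

The same maps can feed any synthetic contrast simply by changing the timing parameters, which is why the parameter selection can be optimized per tissue after the scan.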
In-vivo acquisition and reconstruction
We implemented 1.0 mm and 0.66 mm isotropic whole-brain ViSTa-MRF sequences on one 3T GE Premier scanner and one ultra-high-performance (UHP) scanner (GE Healthcare, Madison, WI, USA), as well as two 3T Siemens Prisma scanners and one Vida scanner (Siemens Healthineers, Erlangen, Germany). A total of twenty healthy adults (age: 23.4±2.3 years) participated in the study, with approval from the institutional review board. Written informed consent was obtained from each participant. Acquisition parameters were: FOV 220×220×220 mm 3 , TR/TE=12/1.8ms with a 6.8ms spiral-out readout and a 1.2ms rewinder for both the 1mm and 0.66mm cases. The maximum gradient strength was 40mT/m and the maximum slew rate 100T/m/s for the 1mm resolution; for the 0.66mm resolution, the maximum gradient strength was 60mT/m and the maximum slew rate 160T/m/s. Sixteen and forty-eight acquisition-groups of eight ViSTa-blocks each were acquired for the 1-mm and 0.66-mm cases, respectively, to achieve sufficient spatiotemporal encoding. This resulted in scan times of 19s×16=5 minutes for the 1mm-iso and 19s×48=15.2 minutes for the 0.66mm-iso datasets. FOV-matched low-resolution (3.4mm×3.4mm×5.0mm) B 0 maps were obtained using a multi-echo gradient-echo sequence. To mitigate B 0 -induced image blurring from the spiral readout, a multi-frequency interpolation (MFI) technique ( 39 , 60 ) was implemented in the subspace reconstruction with conjugate phase demodulation. To achieve robustness to B 1 + inhomogeneity, B 1 + variations were simulated into the dictionary and incorporated into the subspace reconstruction. The Bloch-Siegert method was utilized to obtain FOV-matched low-resolution B 1 + maps (3.4mm×3.4mm×5.0mm). These low-resolution B 1 + maps were then linearly interpolated to match the matrix size of the high-resolution ViSTa-MRF results, as B 1 + varies smoothly in space.
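The B 1 + map upsampling step can be sketched in a few lines. This is a generic separable linear interpolation, not the exact implementation used here; the grid sizes (a hypothetical 64×64×44 Bloch-Siegert map interpolated to a 220×220×220 MRF matrix) are illustrative.

```python
import numpy as np

def upsample_linear(vol, target_shape):
    """Separable (trilinear) upsampling of a 3D map to a larger matrix."""
    for ax, t in enumerate(target_shape):
        src = vol.shape[ax]
        pos = np.linspace(0.0, src - 1.0, t)   # sample positions in source coords
        lo = np.floor(pos).astype(int)
        hi = np.minimum(lo + 1, src - 1)
        w = pos - lo
        vol = np.moveaxis(vol, ax, 0)
        vol = (1 - w)[:, None, None] * vol[lo] + w[:, None, None] * vol[hi]
        vol = np.moveaxis(vol, 0, ax)
    return vol

# hypothetical low-resolution B1+ map (relative flip-angle scale, 0.8-1.2)
rng = np.random.default_rng(1)
b1_low = 0.8 + 0.4 * rng.random((64, 64, 44))
b1_high = upsample_linear(b1_low, (220, 220, 220))
```

Because each axis is a convex combination of neighbouring samples, the interpolated map stays within the range of the measured values, which is the behaviour wanted for a smoothly varying B 1 + field.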
As the ViSTa-MRF dictionary included B 1 + effects, the obtained B 1 + maps were used to select a sub-dictionary for matching at each pixel, thus correcting for B 1 + inhomogeneity-related T 1 and T 2 bias in ViSTa-MRF ( 61 ). The quantitative maps with and without B 1 + correction were compared.
The 20 adult datasets were acquired from five scanners. To test the cross-scanner comparability, we selected the data from one Prisma scanner as the reference. To calculate the cross-scanner mean T 1 , T 2 , and MWF values, 32 representative WM and GM regions, along with 5 representative MWF regions were chosen. Using these mean values, the reproducibility coefficient (RPC) and Bland–Altman plots for T 1 , T 2 , and MWF were computed.
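A minimal sketch of the cross-scanner agreement metrics: the reproducibility coefficient here is taken as 1.96 times the standard deviation of the pairwise differences, the usual Bland–Altman limits-of-agreement definition; the ROI-mean T 1 values below are made up for illustration.

```python
import numpy as np

def bland_altman_rpc(ref, test):
    """Bland-Altman summary: mean difference (bias) and reproducibility
    coefficient RPC = 1.96 * SD of the pairwise differences."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    diff = test - ref
    bias = diff.mean()
    rpc = 1.96 * diff.std(ddof=1)
    return bias, rpc

# hypothetical ROI-mean T1 values (ms) from the reference Prisma scanner
# and a second scanner
t1_ref  = np.array([800., 850., 900., 1400., 1500.])
t1_test = np.array([810., 845., 905., 1410., 1495.])
bias, rpc = bland_altman_rpc(t1_ref, t1_test)
```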
In order to assess the performance of the CRLB-optimized protocol, we acquired the ViSTa-MRF sequence using both the original FAs and the CRLB-optimized FAs, as depicted in Figure 3(A) . To evaluate fat artifacts, the WE-Rect pulse was compared with the normal non-selective Fermi pulse used in our previous study. To validate the accuracy of myelin estimation of the proposed ViSTa-MRF method, the proposed method was compared with a standard 2D fully sampled ViSTa sequence with multi-shot spiral readout, with 1mm in-plane resolution and 5mm slice thickness. Forty-eight spiral interleaves were used to fully sample one slice, resulting in a total acquisition time of 48×(TI 1 560ms + TI 2 220ms + TD 380ms) ≈ 56s per slice. The acquisition was not accelerated with parallel imaging, as the reconstructed image at full sampling already had low SNR. This is much slower than ViSTa-MRF, as the proposed 1mm-iso ViSTa-MRF could acquire 220 slices in 5 minutes (1.3s per slice). To validate the accuracy of T 1 and T 2 estimation of the ViSTa-MRF method, the proposed method was compared with a standard 3D MRF sequence at 1mm isotropic resolution.
In addition to validating our approach on healthy adult volunteers, data were also acquired on two infants to quantitatively investigate infant brain development using MWF, T 1 and T 2 maps. A 5-minute whole-brain ViSTa-MRF and a T 1 -MPRAGE (magnetization-prepared rapid gradient echo) with 1.0 mm-isotropic resolution were used to acquire data on a 4-month-old and a 12-month-old infant. The protocol parameters of the T 1 -MPRAGE sequence for the infant scans were: TR/TE=6.9/2.3ms, inversion time 400ms, flip-angle 11°, image resolution 1×1×1mm 3 , FOV 220×220×220mm 3 , total acquisition time 6 minutes 20 seconds. The experiments were performed on a 3T GE UHP scanner with the approval of the institutional review board. Written informed consent was obtained from each participant’s parents. Scans were scheduled approximately 1 hour after the infant’s bedtime, with a 2-hour window to allow sufficient time for the infant to fall asleep and/or to restart scans if the infant awoke. MR-compatible headphones for infants were used to ensure appropriate noise protection during the scans.
Ex-vivo scans
Additional data were also acquired on ex-vivo brain samples, for which long scan times are feasible, to investigate the capability of ViSTa-MRF for mesoscale quantitative tissue parameter mapping. To validate the image quality of the proposed method, a coronal slab from a 5-month-old post-mortem brain and a left occipital lobe sample from a 69-year-old post-mortem brain were acquired with ViSTa-MRF at 0.50mm-isotropic resolution: FOV 160×160×160mm 3 , with 180 acquisition-groups for a total acquisition time of 19s×180=57 minutes. For the ex-vivo scans, a lower acceleration rate than maximally feasible was used to ensure high SNR.

Results
Figure 3(B) shows reconstructed 1mm-iso T 2 and MWF maps acquired from a healthy adult using the original FAs and the CRLB-optimized FAs. The red arrows indicate that the CRLB-optimized results achieve higher SNR in the MWF maps. The zoom-in figures demonstrate that the CRLB-optimized ViSTa images exhibit higher SNR and better visualization of detailed structures in the cerebellum than the ViSTa-MRF images with the original FAs.
Figure 4(A) shows a representative time-resolved 1mm-iso MRF-volume after subspace reconstruction using the original Fermi pulse and the WE-Rect pulse. As the yellow arrows indicate, the fat artifacts are greatly mitigated with the WE-Rect pulse. Figure 4(B) shows the comparison between a fully sampled standard 2D-ViSTa sequence (56s/slice) and the ViSTa-MRF acquisition (1.3s/slice) with subspace reconstruction; the results are highly consistent, demonstrating the feasibility of leveraging the joint spatiotemporal encoding information for highly accelerated ViSTa-MRF data. Figure 4(C) shows T 1 , T 2 and MWF maps with and without B 1 + correction, as well as the corresponding B 1 + maps. With B 1 + correction, the estimated T 2 and MWF maps are more uniform than the results without B 1 + correction.
Figure S1 shows a representative slice of T 1 and T 2 maps estimated from standard MRF and ViSTa-MRF methods. The comparison between the 1mm ViSTa-MRF and the standard MRF sequences demonstrates that the quantitative T 1 and T 2 maps estimated from ViSTa-MRF are highly consistent with the standard MRF sequence.
Figure 5(A) shows the 5-minute whole-brain 1mm-iso T 1 , T 2 and MWF maps in coronal views. As shown in Figure 5(B) , the MWF values from ViSTa-MRF for a healthy adult across four representative WM regions (genu of the corpus callosum, forceps minor, forceps major and splenium of the corpus callosum) are consistent with literature results ( 25 ). The MWF comparison between literature values and our proposed ViSTa-MRF method is shown in Table S1 . The region of interest (ROI) size was 5×5 for the four WM regions.
Figure 6 displays whole-brain 660μm T 1 , T 2 , PD, ViSTa, and MWF maps obtained within a 15-minute scan time. The zoom-in figures highlight the enhanced ability to visualize subtle brain structures, such as the caudate nucleus (red arrows in Figure 6 ). When compared to the 1mm results, the higher resolution of the 660μm dataset provides improved visualization of the periventricular space (red arrows in Figure S2 ).
Figure S3 shows the cross-scanner comparability of ViSTa-MRF data acquired from 5 scanners and the Bland–Altman plots for T 1 , T 2 , and MWF. The results demonstrate robust ViSTa-MRF results across different scanners.
Figure 7(A) shows the estimated 1mm-iso whole-brain T 1 , T 2 , and MWF maps of the 4-month-old and 12-month-old infants and a reference 22-year-old adult. As shown in Figure 7(B) , a custom-built tight-fitting 32-channel baby coil ( 62 ) was used to acquire the datasets for improved SNR. As shown in Figure 7(C) , the estimated T 1 and T 2 values of white-matter and gray-matter decrease while the estimated MWF of white-matter increases with brain development, reflecting the dynamic process of myelination.
By leveraging the fast acquisition of ViSTa-MRF, we were able to synthesize various contrast-weighted images across the whole brain at high resolution from the 12-month-old infant data, including T 1 -MPRAGE, T 1 - and T 2 -weighted, T 2 -FLAIR (fluid-attenuated inversion recovery) and DIR (double inversion recovery) images, as shown in Figure 8(A) . The parameters for these sequences were optimized to account for the typical T 1 and T 2 of various tissues in the infant brain. This eliminates the need for time-consuming structural scans in infant studies and provides an alternative when motion artifacts compromise the quality of conventional contrast-weighted images. Figure 8(B) provides a visual comparison between an acquired T 1 -MPRAGE image and a synthesized image. The acquired T 1 -MPRAGE scan at 0.9mm, which requires a 12-minute acquisition, was adversely affected by subject motion, resulting in image blurring and compromised image quality, as highlighted by the red arrow in the zoom-in figure in Figure 8(B) . In contrast, our fast ViSTa-MRF acquisition produced a synthesized T 1 -MPRAGE image with improved quality and CNR, where the white- and gray-matter contrast was maximized through synthetic sequence parameter selection using Bloch simulation.
Figure 9 shows the 0.50mm-iso ViSTa-MRF results of the post-mortem brain samples. Figure 9(A) shows quantitative T 1 , T 2 , MWF and PD maps obtained from the ex-vivo 5-month infant brain. The zoom-in figure in Figure 9(A) reveals decreased T 1 , T 2 and PD, and increased MWF values (indicated by red arrows) in the lines of Baillarger within the cortical layers, reflecting the higher myelination level in the cortex. Figure 9(B) shows ViSTa-MRF results of the 69-year-old post-mortem brain sample. Decreased T 1 and PD and increased MWF values (indicated by black arrows) were detected in the line of Gennari in the V1 region, reflecting the high myelination in Layer IV of the cortex, consistent with the high-resolution T 2 -weighted reference images. As the red arrow in Figure 9(B) indicates, the “dark dots” in MWF and the increased T 1 and T 2 values suggest demyelination in this region of the aging brain. These findings align with results from other studies ( 63 – 65 ).

Discussion
In this work, we developed a 3D ViSTa-MRF sequence with CRLB-optimized FAs and a memory-efficient subspace reconstruction to achieve high-resolution MWF, T 1 , T 2 , and PD mapping in a single scan. Compared to the accurate yet time-consuming standard ViSTa sequence, the proposed fast ViSTa-MRF approach provides consistent MWF values at 30x faster scan time with higher SNR. We demonstrate that the proposed method achieves high-fidelity whole-brain MWF, T 1 , T 2 , and PD maps at 1mm- and 0.66mm-isotropic resolution on 3T clinical scanners in 5 minutes and 15.2 minutes, respectively. Furthermore, our preliminary results from the 5-minute infant scans demonstrate the feasibility of using this technology for investigating brain development in early childhood.
Previous studies ( 28 , 44 – 46 ) have demonstrated that MRF is a promising multi-contrast acquisition strategy capable of estimating multi-compartment quantitative tissue parameters within a shorter duration. However, it has been recognized that conventional MRF techniques may have limited sensitivity to tissue compartments characterized by short T 1 values, such as the myelin water component in brain tissue ( 66 ). To address this limitation, several methods have been proposed to modify the MRF sequence. For example, the incorporation of ultra-short-TE acquisition ( 12 ) has been used to extract and differentiate the ultra-short T 2 component of pure myelin from the long T 2 component of myelin-water signals ( 67 ). Additionally, a multi-inversion preparation with short inversion times has been added to the MRF sequence to improve the sensitivity to the short-T 1 myelin-water signal ( 5 , 43 ). These modifications aim to enhance the accuracy and specificity of MRF in quantifying myelin-related tissue properties. However, these approaches still have limitations, as they either require multi-component fitting (e.g., non-negative least squares with joint sparse constraints ( 45 , 46 )) or rely on strong assumptions of predefined compartments with fixed T 1 and T 2 values ( 5 ). These assumptions can be sensitive to noise or may introduce biases in MWF quantification. In contrast, our proposed ViSTa-MRF method with CRLB optimization demonstrates promising sensitivity to short-T 1 components, eliminating the need for multi-compartment modeling or predefined dictionary fitting with fixed T 1 and T 2 values.
In standard ViSTa acquisition with a TR of ~1.3s, 0.8s is used for the inversion preps, 10ms for the short readout (owing to the fast T 2 * decay of the myelin-water signal) and 0.4s for recovery, which is highly inefficient. In this work, we incorporated ViSTa into MRF to improve the speed and accuracy of MWF-mapping. The idea of ViSTa-MRF is to combine the original ViSTa sequence with MRF, where the first time point provides pure myelin-water signal (the ViSTa signal) while the subsequent MRF time-points provide signal from multiple tissue compartments, with each compartment having a CRLB-optimized distinct/non-overlapping signal evolution. Using spatiotemporal subspace reconstruction, the whole time-series (19s per acquisition group) is jointly reconstructed, enabling the reconstruction of the first time-point (ViSTa signal) image to leverage the encoding and SNR-averaging from all the other time points. With the implementation of the ViSTa-MRF acquisition and joint subspace reconstruction, we gain the ability to directly visualize myelin-water images once the time-series data are reconstructed. Additionally, we can achieve high-resolution whole-brain quantitative T 1 , T 2 , and PD maps, just like the original MRF method. Moreover, unlike the standard ViSTa method, which necessitates an additional gradient echo image to calculate the final MWF maps, the ViSTa-MRF approach allows for the calculation of MWF using the MRF-estimated PD maps without the need for any additional scans.
The ViSTa signal has been shown to primarily consist of water components with short relaxation times, and research indicates that this signal predominantly originates from myelin water ( 25 ). However, it is important to consider other influencing factors such as cross relaxation, myelin water exchange, and magnetization transfer, which could potentially lead to an underestimation of MWF values ( 26 , 68 ). Due to these complexities, the MWF maps derived from ViSTa are also termed apparent MWF maps ( 23 ). These maps continue to serve as indicators of myelin content and are employed for both intra- and inter-subject comparisons, as well as cross-sectional and longitudinal studies ( 69 , 70 ). In our ViSTa-MRF simulation, we simplified the model to a single pool without the magnetization transfer effect, which could lead to a potential underestimation of T 1 and T 2 values.
In this work, we employed the CRLB for the optimization of the ViSTa-MRF sequence. The optimized flip angle for the first time-point was determined to be 38°, which differs from the standard ViSTa sequence that uses a 90° excitation. This variation is attributed to the differences in the acquisition methods. While the standard ViSTa sequence utilizes a single readout to maximize the signal with 90° excitation, the ViSTa-MRF acquisition employs continuous readouts to leverage the myelin-water component and achieve SNR-averaging across all time-points with the subspace reconstruction. To strike a balance between high SNR in the first time-point and distinct signal evolution of the myelin-water component in the continuous readouts, CRLB optimization was employed for the flip-angle train of ViSTa-MRF. Consequently, our in-vivo comparison revealed improved SNR in the ViSTa image when compared to the non-optimized protocol. Furthermore, we also applied the ViSTa-MRF sequence to infant scans to quantitatively investigate infant brain development. Given the rapid changes in relaxation times during the development of infant brains, our future work will involve calculating the CRLB for age-specific T 1 and T 2 values of myelin-water, white matter, and gray matter. This optimization will enable us to tailor the infant scan protocol to different infant ages, ensuring accurate and sensitive assessments of brain development.
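The CRLB machinery used for this flip-angle optimization can be illustrated on a toy signal model. The sketch below computes the bound for a simple mono-exponential decay rather than the EPG-simulated ViSTa-MRF signal; the model, noise level and sampling times are all illustrative assumptions.

```python
import numpy as np

def crlb(signal_fn, theta, t, sigma=0.01, h=1e-6):
    """Cramer-Rao lower bounds on the variance of unbiased estimates of
    theta from noisy samples S(t; theta) + N(0, sigma**2)."""
    theta = np.asarray(theta, dtype=float)
    s0 = signal_fn(theta, t)
    J = np.empty((t.size, theta.size))
    for k in range(theta.size):                 # finite-difference Jacobian
        th = theta.copy()
        th[k] += h
        J[:, k] = (signal_fn(th, t) - s0) / h
    fim = J.T @ J / sigma**2                    # Fisher information matrix
    return np.diag(np.linalg.inv(fim))          # per-parameter variance bounds

model = lambda th, t: th[0] * np.exp(-t / th[1])   # toy decay: S = A*exp(-t/T2)
bounds_20 = crlb(model, [1.0, 0.07], np.linspace(0.005, 0.2, 20))
bounds_40 = crlb(model, [1.0, 0.07], np.linspace(0.005, 0.2, 40))
```

In sequence optimization the "sampling" role is played by the flip-angle train: candidate trains are scored by the resulting CRLB for the tissue parameters of interest, and the train with the tightest bounds is kept.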
In this study, we also successfully applied the proposed ViSTa-MRF method for ex-vivo scans. The obtained results from the 5-month-old and 69-year-old brain samples provided valuable insights into the myelination process during early brain development and the demyelination process in the aging brain. As part of our future work, we plan to further investigate the relationship between the estimated MWF and myelin-stained ex-vivo slabs at different cortical depths. This quantitative analysis will enable a comprehensive comparison and validation of our ViSTa-MRF-based myelin water measurements with histological myelin-stained samples.
In developing infant brains, rapid changes in relaxation times present challenges in acquiring sufficient contrast in T 1 -weighted images for cortical segmentation and surface-based analysis ( 4 , 6 , 71 , 72 ). Our ViSTa-MRF technique offers an effective solution to overcome this challenge. By utilizing the quantitative T 1 , T 2 , and PD maps, we can synthesize T 1 -weighted images that provide robust contrasts while significantly reducing scan time and improving motion-robustness during infant examinations, which is beneficial for generating infant substructure segmentation maps. This improvement in image quality holds great promise for more accurate infant brain segmentation ( 73 ). Furthermore, the multi-contrast-weighted image synthesis eliminates the need for time-consuming structural scans in infant studies and provides an alternative when motion artifacts compromise the quality of conventional contrast-weighted images. Currently, we are utilizing quantitative maps and conventional Bloch simulations to synthesize multi-contrast-weighted images, which may not fully capture magnetization transfer effects in the synthesized images ( 74 ). To address this limitation, we plan to explore the use of deep-learning-based methods ( 75 – 77 ) for direct image synthesis from the raw k-space data in our future work, which could achieve faster and more accurate image synthesis.

Conclusion
In this work, we have developed a 3D ViSTa-MRF technique that combines the accurate but time-consuming ViSTa technique with MR fingerprinting for whole-brain multi-parametric MRI. This approach enables us to obtain whole-brain 1mm and 660μm isotropic myelin-water fraction and quantitative T 1 , T 2 and PD maps in 5 and 15 minutes, respectively. These advancements provide great potential for quantitative investigation of infant brain development and older adult brain degeneration.

Purpose:
This study aims to develop a high-resolution whole-brain multi-parametric quantitative MRI approach for simultaneous mapping of myelin-water fraction (MWF), T 1 , T 2 , and proton-density (PD), all within a clinically feasible scan time.
Methods:
We developed 3D ViSTa-MRF, which combined Vi sualization of S hort T ransverse rel a xation time component (ViSTa) technique with MR Fingerprinting (MRF), to achieve high-fidelity whole-brain MWF and T 1 /T 2 /PD mapping on a clinical 3T scanner. To achieve fast acquisition and memory-efficient reconstruction, the ViSTa-MRF sequence leverages an optimized 3D tiny-golden-angle-shuffling spiral-projection acquisition and joint spatial-temporal subspace reconstruction with optimized preconditioning algorithm. With the proposed ViSTa-MRF approach, high-fidelity direct MWF mapping was achieved without a need for multi-compartment fitting that could introduce bias and/or noise from additional assumptions or priors.
Results:
The in-vivo results demonstrate the effectiveness of the proposed acquisition and reconstruction framework to provide fast multi-parametric mapping with high SNR and good quality. The in-vivo results of 1mm- and 0.66mm-iso datasets indicate that the MWF values measured by the proposed method are consistent with standard ViSTa results that are 30x slower with lower SNR. Furthermore, we applied the proposed method to enable 5-minute whole-brain 1mm-iso assessment of MWF and T 1 /T 2 /PD mappings for infant brain development and for post-mortem brain samples.
Conclusions:
In this work, we have developed a 3D ViSTa-MRF technique that enables the acquisition of whole-brain MWF, quantitative T 1 , T 2 , and PD maps at 1mm and 0.66mm isotropic resolution in 5 and 15 minutes, respectively. This advancement allows for quantitative investigations of myelination changes in the brain.

Supplementary Material

Acknowledgement
The authors would like to thank Dr. Vaidehi Subhash Natu, Sarah Shi Tung, Clara Maria Bacmeister and Bella Fascendini from Stanford University for their invaluable assistance in preparing the experiments. In the preparation of this manuscript, the OpenAI’s Large Language Model (LLM), specifically the GPT-4 architecture, was used for grammar check. This work is supported in part by NIH research grants: R01MH116173, R01EB019437, U01EB025162, P41EB030006.
Data and Code availability Statement
The demonstration ViSTa-MRF reconstruction scripts are available online at: https://github.com/SophieSchau/MRF_demo_ISMRM2022 .
The ViSTa-MRF sequence and raw k-space datasets are available upon request.

License: CC BY. Preprint: arXiv:2312.13523v1 (ArXiv, 2023 Dec 21).
PMC10775349 (PMID: 38196750)

Introduction
Moderate and high doses of ionising radiation are well established causes of most types of cancer 1 , 2 . There is emerging evidence, particularly for leukaemia and thyroid cancer, of risk at low doses (<0.1 Gy) of radiation 3 – 6 (roughly 50 times the dose from background radiation in a year). For most other cancer endpoints it is necessary to assess risks via extrapolation from groups exposed at moderate and high levels of dose 7 – 13 . Such extrapolations, which are dependent on knowing the true dose-response relationship, as inferred from some reference moderate/high-dose data (very often the Japanese atomic bomb survivors), are subject to some uncertainty, not least that induced by systematic and random dosimetric errors that may be present in that moderate/high-dose data 1 , 14 . Extensive biostatistical research over the last 30 years has done much to develop understanding of this issue 15 – 30 and in particular of the role played by various types of dose measurement error 31 .
One of the most commonly used methods of correction for dose error is regression calibration 31 . A modification of the regression calibration method has very recently been proposed which is particularly suited to studies in which there is a substantial amount of shared error, and in which there may also be curvature in the true dose response 32 . This so-called extended regression calibration (ERC) method can be used in settings where there is a mixture of Berkson and classical error 32 . In fits to synthetic datasets in which there is substantial upward curvature in the true dose response, and varying (and sometimes substantial) amounts of classical and Berkson error, the ERC method generally outperformed standard regression calibration, Monte Carlo maximum likelihood (MCML) and unadjusted regression, particularly with respect to the coverage probabilities of the quadratic coefficient, and for larger magnitudes of the Berkson error, whether shared or unshared 32 .
A Bayesian model averaging (BMA) method has also recently been proposed, the so-called 2-dimensional Monte Carlo with Bayesian model averaging (2DMC with BMA) method 28 , which has been used in fits to radiation thyroid nodule data 33 . The so-called frequentist model averaging (FMA) method has also been recently proposed, although only fitted to simulated data 34 . In the present paper we assess the performance of a variant implementation of the 2DMC with BMA method, which is more closely aligned with standard implementations of BMA 35 , and of FMA, against ERC, also making comparisons with other methods of correction for dose error using simulated data. The simulated data used are exactly as in the previous report 32 .
Synthetic data used for assessing corrections for dose error
The methods and data used closely parallel those of the previous paper 32 . We used the publicly available version of the leukaemia and lymphoma data of Hsu et al 36 to guide construction of a synthetic dataset. We assumed a composite Berkson-classical error model in which the true dose and the surrogate dose to individual (in dose group ) in simulation are given by:
The variables are independent identically distributed random variables. The factors are the central estimates of dose, as given previously 32 . The factors and ensure that the distributions given by ( 1 ) and ( 2 ) have theoretical mean that coincides with the central estimates .
We generated a number of different versions of the dose data, with logarithmic SD taking values of 0.2 (20%) or 0.5 (50%). This individual dose data was then used to simulate the distribution of cancers for each of simulated datasets, indexed by , using a model in which the assumed probability of being a case for individual is given by: the scaling constant being chosen for each simulation (but not for the Bayesian model fits) to make these sum to 1. As previously, we assumed coefficients , derived from leukaemia data of Hsu et al 36 .
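The mean-preserving lognormal error structure described above can be sketched in a few lines. This is an illustrative simulation, not the study's Fortran implementation: the group doses, SDs and sample counts are hypothetical, and each multiplicative factor exp(sd·ε) is divided by exp(sd²/2) so that the simulated doses have theoretical mean equal to the central estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_doses(Q, sd_shared, sd_unshared, n_per_group, n_sims):
    """Mean-preserving multiplicative lognormal (Berkson-type) dose errors.

    The shared factor is drawn once per simulated dataset; the unshared
    factors are drawn independently per individual."""
    Q = np.asarray(Q, dtype=float)
    doses = np.empty((n_sims, Q.size, n_per_group))
    for s in range(n_sims):
        shared = np.exp(sd_shared * rng.standard_normal() - sd_shared**2 / 2)
        unshared = np.exp(sd_unshared * rng.standard_normal((Q.size, n_per_group))
                          - sd_unshared**2 / 2)
        doses[s] = Q[:, None] * shared * unshared
    return doses

Q = [0.0, 0.1, 0.5, 1.0, 2.0]   # hypothetical group central dose estimates (Gy)
d = simulate_doses(Q, sd_shared=0.5, sd_unshared=0.5, n_per_group=200, n_sims=400)
mean_top = d[:, 4].mean()        # should be close to Q[4] = 2.0
```

Classical error would be layered on top of the true dose in the same multiplicative way to produce the surrogate dose.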
A total of samples were taken of each type of dose, as given by expressions ( 1 ) and ( 2 ). A total of simulations of these dose+cancer ensembles were used to fit models and evaluate fitted model means and coverage probability. Having derived synthetic individual-level data, for the purposes of model fitting, for all models except MCML and 2DMC with BMA, the data were then collapsed (summing cases, averaging doses) into the 5 dose groups used previously 32 . Poisson linear relative risk generalised linear models 37 were fitted to this grouped data, with rates given by expression ( 3 ), using as offsets the previously-specified number per group 32 . Models were fitted using five separate methods (unadjusted regression, regression calibration, ERC, MCML, and 2DMC with BMA). For ERC and the other methods previously used, the methods of deriving doses and model fitting were as in our earlier paper 32 .
We used a BMA method somewhat analogous to the 2DMC with BMA method of Kwon et al 28 , using the full set of mean true doses per group as previously generated for MCML, the mean doses per group for each simulation being given by group means of the samples generated by expression ( 1 ), averaged over the dose samples. The model was fitted using Bayesian Markov Chain Monte Carlo (MCMC) methods. Associated with the dose vector is a vector of probabilities which is generated using variables , so that:
This is therefore quite close to the method proposed by Hoeting et al 35 , and somewhat distinct from the formulation of 2DMC with BMA proposed by Kwon et al 28 , as we discuss at greater length below. For this reason we shall describe our own method as quasi-2DMC with BMA. All main model parameters had normal priors, with mean 0 and standard deviation (SD) 1000. The Metropolis-Hastings algorithm was used to generate samples from the posterior distribution. Random normal proposal distributions were assumed for all variables, with SD of 0.2 for and 1 for , and SD 2 for all . The were proposed in blocks of 10. Two separate chains were used, in order to compute the Brooks-Gelman-Rubin (BGR) convergence statistic 38 , 39 . The first 1000 simulations were discarded, and a further 1000 simulations taken for sampling. The proposal SDs and number of burn-in samples were chosen to give mean BGR statistics (over the 500 simulated datasets) that were in all cases less than 1.03 and acceptance probabilities of about 30% for the main model parameters , suggesting good mixing and likely chain convergence. For the ERC model confidence intervals were (as previously) derived using the profile likelihood 37 and for the quasi-2DMC with BMA model Bayesian uncertainty intervals were derived.
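The BGR (potential scale reduction) diagnostic referred to above can be sketched as follows; this is the standard two-chain Gelman–Rubin statistic applied to toy draws, not the fitting code used in the study.

```python
import numpy as np

def gelman_rubin(chains):
    """Brooks-Gelman-Rubin potential scale reduction factor for one
    parameter; chains is (n_chains, n_draws).  Values near 1 suggest
    convergence of the MCMC chains."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(7)
good = rng.standard_normal((2, 1000))            # two well-mixed chains
bad = good + np.array([[0.0], [3.0]])            # chains stuck at different means
r_good = gelman_rubin(good)
r_bad = gelman_rubin(bad)
```

A threshold such as the 1.03 used above flags only chains whose between-chain spread is small relative to the within-chain spread.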
The FMA model of Kwon et al 34 was also fitted. For each of the dose vectors, model ( 3 ) was fitted using Poisson regression (via maximum likelihood) and the AIC, , computed, as well as the central estimate and profile likelihood 95% confidence intervals for each coefficient, and ; from these the estimated standard deviations were derived, via and . For each fit simulations were taken from the respective normal distributions and , and each such sample given an AIC-derived weight via . The central estimate for each coefficient was taken as the AIC-derived-weighted sum of these samples, and the 95% CI estimated from the 2.5% and 97.5% centiles of the AIC-derived-weighted samples. A variety of in the range 100–1000 were used, yielding very similar results. We also tried using asymmetric confidence intervals, employing and to separately generate the samples above and below the central estimate , and likewise for the coefficient. However, this generally yielded badly biased estimates of both coefficients, because of occasional samples in which one or other CI was very large.
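The AIC-weighted averaging step can be sketched as below. This is a schematic of the pooling over dose realizations only: the per-realization Poisson fits are replaced by hypothetical point estimates, SDs and AICs, and the weight definition exp(-AIC/2), normalized across realizations, is an assumption consistent with the description above.

```python
import numpy as np

def fma_interval(estimates, sds, aics, n_draws=1000, seed=0):
    """AIC-weighted frequentist model averaging of one coefficient across
    candidate dose realizations: draws from N(est_m, sd_m) are pooled with
    weights proportional to exp(-AIC_m / 2)."""
    rng = np.random.default_rng(seed)
    aics = np.asarray(aics, dtype=float)
    w = np.exp(-(aics - aics.min()) / 2.0)       # subtract min for stability
    w /= w.sum()
    draws = np.concatenate([rng.normal(e, s, n_draws)
                            for e, s in zip(estimates, sds)])
    weights = np.repeat(w / n_draws, n_draws)    # aligned with the draws
    center = np.sum(weights * draws)
    order = np.argsort(draws)
    cdf = np.cumsum(weights[order])              # weighted empirical CDF
    lo = draws[order][np.searchsorted(cdf, 0.025)]
    hi = draws[order][np.searchsorted(cdf, 0.975)]
    return center, (lo, hi)

# hypothetical per-realization fits of one coefficient
center, (lo, hi) = fma_interval(estimates=[1.0, 1.2, 0.9],
                                sds=[0.10, 0.15, 0.10],
                                aics=[100.0, 101.0, 103.0])
```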
The Fortran 95–2003 program used to generate these datasets and perform Poisson and Bayesian MCMC model fitting, and the relevant steering files employed to control this program, are given in online Appendix A. Using the mean coefficients for each model and error scenario over the 500 simulated datasets, , the percentage mean bias in predicted relative risk (RR) is calculated, via:
This was evaluated for two values of predicted dose, 0.1 Gy and 1 Gy.
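Model (3) above is a Poisson linear relative risk model; assuming the usual linear-quadratic parameterization RR(D) = 1 + a·D + b·D², the bias measure can be sketched as follows, with the fitted and true coefficient values purely hypothetical.

```python
def percent_bias_rr(a_fit, b_fit, a_true, b_true, dose):
    """Percentage bias of predicted relative risk at a given dose, assuming
    a linear-quadratic relative risk RR(D) = 1 + a*D + b*D**2."""
    rr_fit = 1.0 + a_fit * dose + b_fit * dose**2
    rr_true = 1.0 + a_true * dose + b_true * dose**2
    return 100.0 * (rr_fit / rr_true - 1.0)

# e.g. a fit that overestimates the linear and underestimates the quadratic
# coefficient (hypothetical values), evaluated at the two doses of Table 3
bias_low = percent_bias_rr(2.0, 0.5, 1.0, 1.0, dose=0.1)
bias_high = percent_bias_rr(2.0, 0.5, 1.0, 1.0, dose=1.0)
```

Because the linear term dominates at 0.1 Gy and the quadratic term matters more at 1 Gy, opposite biases in the two coefficients produce dose-dependent bias in the predicted RR.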
Data availability statement
The datasets generated and analysed in the current study are available by running the Fortran 95/2003 program fitter_shared_error_simulation_reg_cal_Bayes_FMA.for, given in the online web repository, with any of the five steering input files given there. All are described in Appendix A. The datasets are temporarily stored in computer memory, and the program uses them for fitting the Poisson models described in the Methods section.
As shown in Table 1 , the coverage probabilities of the ERC method for the linear coefficient are near the desired 95% level, irrespective of the magnitudes of assumed Berkson error, whether shared or unshared. However, for the quadratic coefficient the ERC method yields coverage probabilities that are somewhat too low when shared and unshared Berkson errors are both large (with logarithmic SD=50%), although otherwise it performs well ( Table 1 ). It should be noted that classical error will have no effect on either of these models, as its only effect is on the unadjusted regression model (via sampling of the surrogate dose), and so this effect is not shown.
By contrast the coverage probabilities of both the linear coefficient and the quadratic coefficient for the quasi-2DMC with BMA method are generally much too low, and when shared Berkson error is large (50%) the coverage probabilities do not exceed 5% ( Table 1 ). The coverage for the FMA method is generally better, and for the linear coefficient does not depart too markedly from the desired 95%; however, the coverage of the quadratic coefficient tends to be too high, for any non-zero level of Berkson error, whether shared or unshared ( Table 1 ).
Table 2 shows the coefficient mean values, averaged over all 500 simulations. A notable feature is that for larger values of Berkson error, the linear coefficient for quasi-2DMC with BMA is substantially overestimated, and the quadratic dose coefficient substantially underestimated, both by factors of about 10. For ERC the estimates of the quadratic coefficient are upwardly biased, but not by such large amounts. For FMA both coefficients have pronounced upward bias, particularly for large shared Berkson error (50%) ( Table 2 ).
Table 3 shows that the bias in RR for ERC evaluated either at 0.1 Gy or 1 Gy does not exceed 20%. Regression calibration performs somewhat worse, with bias of ~40% when shared and unshared Berkson error are large (50%) for predictions at 1 Gy, although otherwise under 25%. For all but the smallest shared and unshared Berkson errors (both 0% or 20%) MCML has bias of ~25–45% at 1 Gy, although under 3% otherwise. Quasi-2DMC with BMA performs somewhat worse, with bias of ~30–45% when Berkson errors are large, whatever the evaluated dose. Unadjusted regression yields the most severe bias, which exceeds 50% in many cases for predictions at 1 Gy, and when shared or unshared classical errors are large (50%) often exceeds 100% ( Table 3 ). Bias for FMA is generally moderate to severe, and particularly bad (>100%) when shared Berkson error is large (50%) ( Table 3 ).

Discussion
We have demonstrated that the quasi-2DMC with BMA method performs poorly, with coverage probabilities both for the linear and quadratic dose coefficients that are under 5% when the magnitude of shared Berkson errors is large (50%) ( Table 1 ). This method also produces substantially biased (by a factor of 10) estimates of both the linear and quadratic coefficients, with the linear coefficient overestimated and the quadratic coefficient underestimated ( Table 2 ). FMA performs generally better, although the coverage probability for the quadratic coefficient is uniformly too high ( Table 1 ). However both linear and quadratic coefficients have pronounced upward bias, particularly when Berkson error is large (50%) ( Table 2 ). By comparison the ERC method yields coverage probabilities that are too low when shared and unshared Berkson errors are both large (50%), although otherwise it performs well, and coverage is generally better than for quasi-2DMC with BMA or FMA. As shown previously it generally outperforms all other methods (regression calibration, MCML, unadjusted regression) 32 . The upward bias in estimates of the linear coefficient and the downward bias in estimates of the quadratic coefficient, at least for larger magnitudes of error ( Table 2 ), largely explain the poor coverage of quasi-2DMC with BMA in these cases ( Table 1 ). The bias of the predicted RR at a variety of doses is generally smallest for ERC, and largest (apart from unadjusted regression) for quasi-2DMC with BMA and for FMA, with standard regression calibration and MCML exhibiting bias in predicted RR generally somewhat intermediate between the other two methods ( Table 3 ).
As noted above, the form of the quasi-2DMC with BMA model that we fit differs slightly from that employed by Kwon et al 28 . The standard formulation of BMA, as given by Hoeting et al 35 , and which we employ, is based on the posterior model probabilities given by Eq. ( 4 ). We fitted this via successive application of Metropolis-Hastings samplers to (a) sample the model dose-response parameters conditionally on the selected dose vector, and (b) sample the dose vector probability parameters, which determine the selected dose vector, conditionally on the dose-response parameters. Kwon et al 28 did this slightly differently, sampling (a) the dose-response parameters conditional on the dose vector, (b) the dose vector (via a multinomial distribution) conditional on the dose-response and probability parameters, and (c) the probability parameters (via a Dirichlet distribution) conditional on the dose vector and the dose-response parameters. These two methods should be approximately equivalent, although the second is considerably more computationally challenging, which may be the reason why Kwon et al 28 resorted to use of an approximate Monte Carlo sampler, the so-called stochastic approximation Monte Carlo (SAMC) method of Liang et al 40 , in order to get their method to work. Unfortunately Kwon et al do not provide enough information to infer the precise form of SAMC that they used 28 , and for that reason we have adopted this alternative, which is in any case possibly more computationally efficient. It is possible that the SAMC implementation used by Kwon et al 28 may behave differently from the more standard implementation of BMA given here. Kwon et al 28 report results of a simulation study that tested the 2DMC with BMA method against what they term “conventional regression”, which may have been regression calibration; they did not assess performance against MCML. Kwon et al 28 report generally better performance of 2DMC with BMA against the regression calibration alternative.
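For intuition, the posterior model probabilities at the heart of BMA can be approximated without MCMC by the Schwarz (BIC) approximation to the marginal likelihood; this simplified stand-in is not the Metropolis-Hastings scheme described in the text, and the numbers below are illustrative:

```python
import numpy as np

def bma_weights(bics, priors=None):
    """Posterior model probabilities, approximating the marginal
    likelihood p(D | M_m) by exp(-BIC_m / 2) (Schwarz approximation)."""
    b = np.asarray(bics, dtype=float)
    if priors is None:
        priors = np.ones_like(b) / b.size  # uniform model prior
    w = np.exp(-(b - b.min()) / 2.0) * np.asarray(priors)
    return w / w.sum()

def bma_average(estimates, bics):
    """Model-averaged posterior mean of a quantity of interest."""
    return float(np.dot(bma_weights(bics), estimates))

# three hypothetical candidate models with their BICs and estimates
w = bma_weights([210.0, 212.0, 220.0])
avg = bma_average([1.0, 1.4, 3.0], [210.0, 212.0, 220.0])
```

Models with smaller BIC dominate the average; a model 10 BIC units worse contributes almost nothing.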
Kwon et al 34 tested FMA against the 2DMC method and against the so-called corrected information matrix (CIM) method 41 and observed similar performance, in particular adequate coverage, of all three methods, although narrower CI were produced by FMA compared with CIM under a number of scenarios. However, in all cases only a linear model was tested 34 . Set against this, Stram et al reported results of a simulation study 42 which suggested that the 2DMC with BMA method will produce substantially upwardly biased estimates of risk, and that the coverage may be poor, somewhat confirming our own findings: although we do not always find upward bias, the coverage of both regression coefficients for quasi-2DMC with BMA is poor for larger values of shared Berkson error ( Tables 1 , 2 ).
Dose error in radiation studies is unavoidable, even in experimental settings. It is particularly common in epidemiological studies, in particular those of occupationally exposed groups, where shared errors, resulting from group assignments of dose, dosimetry standardizations, or variability resulting from application of, for example, biokinetic or environmental models, result in certain shared unknown (and variable) parameters between individuals or groups. There have been extensive assessments of uncertainties in dose in these settings 43 – 45 . Methods for taking account of such uncertainties cannot always correct for them, but they at least enable error adjustment (e.g. to CI) to be made 31 . As previously discussed 32 the defects in the standard type of regression calibration are well known, in particular that the method can break down when dose error is substantial 31 , as it is in many of our scenarios. It also fails to take account of shared errors. Perhaps because of this a number of methods have been recently developed that take shared error into account, in particular the 2DMC and BMA method 28 and the CIM method 41 . The CIM method only applies to situations where there is pure Berkson error. Both 2DMC with BMA and CIM have been applied in a number of settings, the former to analysis of thyroid nodules in nuclear weapons test exposed individuals 33 , and the latter to assessment of lung cancer risk in Russian Mayak nuclear workers 46 and cataract risk in the US Radiologic Technologists 47 . In principle the simulation extrapolation (SIMEX) method 48 can be applied in situations where there is shared (possibly combined with unshared) classical error, where the magnitudes of shared and unshared error are known. However, this was not part of the original formulation of SIMEX 48 . Possibly because of the restrictions on error structure and its extreme computational demands SIMEX has only rarely been used in radiation settings 27 , 49 .
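To make the SIMEX idea concrete: extra simulated classical error of increasing magnitude lambda is added, the naive estimate is recomputed at each lambda, and a curve fitted to these estimates is extrapolated back to the zero-error point lambda = -1. A toy sketch for a simple linear regression slope (illustrative data and settings, not from the cited applications):

```python
import numpy as np

rng = np.random.default_rng(2)

def naive_slope(x, y):
    """Ordinary least squares slope of y on x."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200):
    """SIMEX for a regression slope under classical error W = X + U,
    U ~ N(0, sigma_u^2).

    Simulation step: add extra error of variance lambda * sigma_u^2 and
    record the (increasingly attenuated) naive slope, averaged over B
    replicates. Extrapolation step: fit the slope as a quadratic in
    lambda and evaluate at lambda = -1."""
    lams = np.concatenate(([0.0], lambdas))
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(naive_slope(w, y))
            continue
        reps = [naive_slope(w + rng.normal(0, np.sqrt(lam) * sigma_u, w.size), y)
                for _ in range(B)]
        means.append(np.mean(reps))
    coef = np.polyfit(lams, means, deg=2)
    return float(np.polyval(coef, -1.0))

# toy data: true slope 1, classical error with sigma_u = 0.5
x = rng.normal(0, 1, 4000)
y = 1.0 * x + rng.normal(0, 0.2, x.size)
w_obs = x + rng.normal(0, 0.5, x.size)
naive = naive_slope(w_obs, y)            # attenuated toward 0
corrected = simex_slope(w_obs, y, sigma_u=0.5)
```

The naive slope is attenuated by roughly 1/(1 + sigma_u^2/var(x)); the extrapolated SIMEX estimate recovers most, though not all, of the attenuation.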
Conclusions
We have demonstrated that the quasi-2DMC with BMA method performs poorly, with coverage probabilities both for the linear and quadratic dose coefficients that are under 5% when the magnitude of shared Berkson error is moderate to large ( Table 1 ). This method also produces substantially biased (by a factor of 10) estimates of both the linear and quadratic coefficients, with the linear coefficient overestimated and the quadratic coefficient underestimated ( Table 2 ). FMA performs generally better, although the coverage probability for the quadratic coefficient is uniformly too high ( Table 1 ). However both linear and quadratic coefficients using FMA have pronounced upward bias, particularly when Berkson error is large (50%) ( Table 2 ). By comparison the recently developed ERC method 32 yields coverage probabilities that are too low when shared and unshared Berkson errors are both large (50%), although otherwise it performs well, and coverage is generally better than for quasi-2DMC with BMA or FMA. The bias of the predicted RR at a variety of doses is generally smallest for ERC, and largest for quasi-2DMC with BMA and FMA, with standard regression calibration and MCML exhibiting bias in predicted RR generally somewhat intermediate between the other two methods ( Table 3 ).

Contributorship
M.P.L. formulated the analysis, wrote and ran the analysis code and wrote the first draft of the paper. L.B.Z. and N.H. contributed to extensive rewrites of the subsequent drafts of the paper. All authors reviewed the manuscript and approved its submission.
Abstract

For many cancer sites low-dose risks are not known and must be extrapolated from those observed in groups exposed at much higher levels of dose. Measurement error can substantially alter the dose-response shape and hence the extrapolated risk. Even in studies with direct measurement of low-dose exposures measurement error could be substantial in relation to the size of the dose estimates and thereby distort population risk estimates. Recently, there has been considerable attention paid to methods of dealing with shared errors, which are common in many datasets, and particularly important in occupational and environmental settings.
In this paper we test Bayesian model averaging (BMA) and frequentist model averaging (FMA) methods, the first of these similar to the so-called Bayesian two-dimensional Monte Carlo (2DMC) method, and both fairly recently proposed, against a very newly proposed modification of the regression calibration method, which is particularly suited to studies in which there is a substantial amount of shared error, and in which there may also be curvature in the true dose response. The quasi-2DMC with BMA method performs poorly, with coverage probabilities both for the linear and quadratic dose coefficients that are under 5% when the magnitude of shared Berkson error is large (50%). The method also produces substantially biased (by a factor of 10) estimates of both the linear and quadratic coefficients, with the linear coefficient overestimated and the quadratic coefficient underestimated. FMA performs generally better, although the coverage probability for the quadratic coefficient is uniformly too high. However both linear and quadratic coefficients have pronounced upward bias, particularly when Berkson error is large. By comparison the extended regression calibration method yields coverage probabilities that are too low when shared and unshared Berkson errors are both large (50%), although otherwise it performs well, and coverage is generally better than the quasi-2DMC with BMA or FMA methods. The bias of the predicted relative risk at a variety of doses is generally smallest for extended regression calibration, and largest for the quasi-2DMC with BMA and FMA methods (apart from unadjusted regression), with standard regression calibration and Monte Carlo maximum likelihood exhibiting bias in predicted relative risk generally somewhat intermediate between the other two methods.

Acknowledgements
The authors are grateful for the detailed and helpful comments of Dr Jay Lubin. The Intramural Research Program of the National Institutes of Health, the National Cancer Institute, Division of Cancer Epidemiology and Genetics supported the work of MPL. The work of LBZ was supported by the National Cancer Institute and National Institutes of Health (Grant No. R01CA197422). The funders had no role in considering the study design or in the collection, analysis, interpretation of data, writing of the report, or decision to submit the article for publication.

License: CC BY. Citation: ArXiv, 2023 Dec 22; arXiv:2312.02215v2.
PMC10775354 (PMID: 38196749)

INTRODUCTION: FMRI BRAIN DATA AND EFFECTIVE CONNECTIVITY
Functional Magnetic Resonance Imaging (fMRI) offers the highest-currently-available spatial resolution for three-dimensional images of real-time functional activity across the entire human brain [ Glasser et al., 2016 ]. For this reason, enormous resources have been spent to collect fMRI data from hundreds of thousands of individuals for research purposes [ Volkow et al., 2018 , Elam et al., 2021 , Alfaro-Almagro et al., 2018 ]. This data is collected and analyzed with the purpose of answering a large variety of scientific questions, such as understanding drivers of adolescent substance use initiation [ Volkow et al., 2018 ].
In this paper we focus on questions related to how activity in different areas of the brain may causally influence activity in other areas of the brain [ Friston, 2009 ]. This can serve a variety of purposes, such as guiding medical interventions like non-invasive brain stimulation (NIBS) [ Horn and Fox, 2020 ]. For example, deep-brain stimulation targets the subthalamic nucleus (STN); however, current NIBS technologies cannot directly manipulate activity in STN [ Horn et al., 2017 ]. The area could potentially be indirectly manipulated through other brain areas, however this requires learning, either for the population or for each individual, which other brain areas have the greatest causal influence on STN. Standard fMRI analysis unfortunately eschews estimating causal effects.
Current standard practice in fMRI analysis is to describe the connections between brain areas with empirical correlations [ Biswal et al., 1995 ]. Partial correlation methods such as glasso [ Friedman et al., 2008 ] are also used but are less common [ Marrelec et al., 2006 ]. The connections learned by such methods are called “functional connections”. This term is used to distinguish them from “effective connections” where one area is described as causally influencing another [ Friston, 2009 , Reid et al., 2019 , Pearl, 2000 ]. Despite the overtly non-causal nature of functional connectivity, it is still used by the larger fMRI research community to identify brain areas that should be targeted with interventions, and more generally to describe how the brain functions. A small but growing community of fMRI researchers are adopting causal discovery analysis (CDA) instead [ Spirtes et al., 2000 , Camchong et al., 2023 , Sanchez-Romero and Cole, 2021 , Rawls et al., 2022 ]. These papers have demonstrated that CDA is capable of recovering information from fMRI data above and beyond what is possible with traditional fMRI connectivity methods, however the details of their approaches vary substantially.
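To make the functional-connectivity notions above concrete, here is a minimal numpy sketch of full correlations and unpenalized partial correlations (glasso additionally imposes an L1 penalty on the precision matrix); the chain example is a toy:

```python
import numpy as np

rng = np.random.default_rng(3)

def functional_connectivity(ts):
    """Pearson correlation matrix between parcels (rows: time, cols: parcels)."""
    return np.corrcoef(ts, rowvar=False)

def partial_correlation(ts):
    """Partial correlations from the inverse covariance (precision) matrix."""
    prec = np.linalg.inv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pc = -prec / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# toy chain x -> y -> z: x and z are marginally correlated, but their
# partial correlation given y is (approximately) zero
n = 20000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)
ts = np.column_stack([x, y, z])
fc = functional_connectivity(ts)
pc = partial_correlation(ts)
```

Note that neither matrix orients the connections: both are symmetric, which is exactly why they describe functional rather than effective connectivity.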
In any project involving CDA, the researcher faces many choices (degrees of freedom). Using CDA requires navigating a wide range of algorithms, properties, assumptions, and settings. Previous fMRI CDA studies have often differed in their specifics, but share a number of common strategies. We focus this paper’s discussion on the decisions made during one project: the development and application of the Greedy Adjacencies with Non-Gaussian Orientations (GANGO) method to data from the Human Connectome Project (HCP) [ Rawls et al., 2022 ].
The aim of Rawls et al. [2022 ] was to describe the structure of effective connectivity in the brain commonly found in healthy individuals while at rest, a.k.a. the resting-state causal connectome. This model could then be used to identify causal connectome alterations that may be responsible for psychopathology in people suffering from mental health disorders, as well as to predict the clinical severity of lesions in different brain areas in terms of their impact on the larger causal connectome. To construct this model, however, numerous decisions were made regarding both how to perform the CDA itself and how to clean and process the raw fMRI data. The need for such decisions comes from several challenges that are universal to CDA fMRI studies, which we describe next.

CDA AND EFFECTIVE CONNECTIVITY METHODS FOR FMRI
When selecting a CDA method for fMRI analysis, the large number of algorithms developed over the last thirty years can be intimidating. However, the majority of CDA methods do not meet the minimum requirements for analyzing parcellated fMRI data. For example, many CDA methods do not scale to problems with hundreds of variables (C6). Moreover, methods that rely on the cross-temporal relationships in the data, such as Granger causality [ Granger, 1969 , Friston et al., 2014 ], will not produce meaningful results due to undersampling [ Barnett and Seth, 2017 ] (C3). In practice, researchers usually treat fMRI data as if it were independent and identically distributed while applying CDA methods to avoid this issue. Below, we review popular CDA methods that are appropriate for analyzing parcellated fMRI data. Table 1 compares these methods relative to the 9 challenges.
Greedy Equivalent Search (GES)
Greedy Equivalent Search (GES) is a two-phase greedy search algorithm that moves between equivalence classes of Directed Acyclic Graphs (DAGs) by adding or removing conditional independence relations. Fast Greedy Equivalent Search (fGES) is an efficient implementation of GES capable of scaling up to a million variables (C6). It does not have any special preprocessing requirements (C1), and has good model performance on limited samples (C9). However, this algorithm is intended for sparsely connected networks and suffers in terms of performance and scalability on densely connected problems (C7) [ Chickering, 2002 , Ramsey et al., 2017 ].
Greedy Relaxations of the Sparsest Permutation (GRaSP)
Greedy Relaxations of the Sparsest Permutation (GRaSP) is a hierarchy of greedy search algorithms that move between variable orderings. For brevity, we will use the GRaSP acronym to refer to the most general algorithm in the hierarchy. Starting from a random order, GRaSP repeatedly iterates over pairs of variables that are adjacent in the DAG constructed by applying the Grow Shrink (GS) algorithm [ Margaritis and Thrun, 1999 ] in a manner consistent with the order, modifying the order in a way consistent with flipping the corresponding edge in the DAG. GRaSP does not have any special preprocessing requirements (C1), and has good model performance on limited samples (C9). However, unlike fGES, GRaSP retains its good performance on high density models (C7) but only scales to one or two hundred variables (C6) [ Lam et al., 2022b ].
Direct Linear Non-Gaussian Acyclic Model (LiNGAM)
Direct Linear Non-Gaussian Acyclic Model (LiNGAM) is a greedy algorithm that constructs a variable ordering based on a pairwise orientation criterion [ Hyvärinen and Smith, 2013 ]. Starting from an empty list, the order is constructed by adding one variable at a time, the one that maximizes the pairwise orientation criterion, until a full ordering is constructed. Once the order is constructed, it is projected to a DAG. The pairwise orientation criterion relies on measures of non-Gaussianity, so some preprocessing techniques cannot be used (C1). Moreover, LiNGAM cannot learn cycles (C2). That being said, the method scales fairly well and has no problem learning scale-free structures (C6), (C7), and (C9) [ Shimizu et al., 2011 ].
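The non-Gaussian asymmetry these orientation rules exploit can be illustrated with a simpler residual-independence score (this is the general LiNGAM principle, not the exact pairwise likelihood-ratio or RSkew measure used by the methods above):

```python
import numpy as np

rng = np.random.default_rng(4)

def residual_dependence(regressor, target):
    """Regress target on regressor (OLS) and measure leftover dependence
    as |corr(regressor, squared residual)|. For the true causal direction
    with independent non-Gaussian noise this is near zero."""
    xr = regressor - regressor.mean()
    yt = target - target.mean()
    b = (xr @ yt) / (xr @ xr)
    resid = yt - b * xr
    return abs(np.corrcoef(xr, resid ** 2)[0, 1])

def pairwise_direction(x, y):
    """Prefer the direction whose regression leaves the residual
    'more independent' of the regressor."""
    if residual_dependence(x, y) < residual_dependence(y, x):
        return "x->y"
    return "y->x"

# skewed (non-Gaussian) noise makes the true direction identifiable;
# with Gaussian noise both directions would look symmetric
n = 50000
x = rng.exponential(1.0, n) - 1.0
e = rng.exponential(1.0, n) - 1.0
y = 0.8 * x + e
direction = pairwise_direction(x, y)
```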
The next three methods broadly fall into the same class of algorithms. These are two-step approaches in which the first step learns a graph using an existing CDA method and the second step augments and (re)orients the edges of the graph learned in the first step; this general approach was pioneered by Hoyer et al. [2008 ]. All three of these methods use non-Gaussianity, and thus some preprocessing techniques cannot be used (C1). Moreover, the properties of these methods are affected by the algorithm chosen in the first step: for example, if the chosen algorithm fails to scale in some respect, then so does the overall method (C6 - C9).
Greedy Adjacencies with Non-Gaussian Orientations (GANGO)
Greedy Adjacencies with Non-Gaussian Orientations (GANGO) uses fGES [ Ramsey et al., 2017 ] as a first step in order to learn the adjacencies and then uses the RSkew pairwise orientation rule [ Hyvärinen and Smith, 2013 ], also referred to as robust skew, for orientations. This method allows for cycles (with the exception of two-cycles) and scales well [ Rawls et al., 2022 ].
Fast Adjacency Skewness (FASk)
Fast Adjacency Skewness (FASk) uses fast adjacency search (FAS), which is the adjacency phase of the PC algorithm, as a first step in order to learn the adjacencies and then uses a series of tests to add additional adjacencies, orient two-cycles, and orient directed edges. This method allows for cycles and scales well [ Sanchez-Romero et al., 2019 ].
Two-Step
Two-Step uses adaptive lasso or FAS as a first step in order to learn the adjacencies and then uses independent subspace analysis (ISA) or independent component analysis (ICA) if no latent confounders are identified. This method allows for cycles and latent confounding, and scales well [ Sanchez-Romero et al., 2019 ].

RESULTS FROM THE HCP CASE STUDY
Here we briefly review the findings from the HCP case study by Rawls et al. [2022 ]. This study developed the GANGO causal connectivity method, which was applied to n=442 resting-state fMRI data sets. The connectomes produced were extremely sparse (2.25% edge density) compared to Pearson correlation connectomes, which are often thresholded to an edge density of 5–50%. Nevertheless, graphs produced by GANGO were fully connected in nearly all cases, which was not the case for standard Pearson correlation graphs.
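Summaries such as edge density, node degree, and efficiency are computed directly from an adjacency matrix; a small self-contained sketch (the star graph below is a toy example, not HCP data):

```python
import numpy as np
from collections import deque

def shortest_path_lengths(adj):
    """All-pairs shortest path lengths of an undirected, unweighted
    graph given as a boolean adjacency matrix (BFS from every node)."""
    n = adj.shape[0]
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(adj[u]):
                if dist[s, v] == np.inf:
                    dist[s, v] = dist[s, u] + 1
                    q.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(i, j) over all ordered pairs i != j."""
    d = shortest_path_lengths(adj)
    n = d.shape[0]
    with np.errstate(divide="ignore"):
        inv = 1.0 / d
    np.fill_diagonal(inv, 0.0)
    return inv.sum() / (n * (n - 1))

def degrees(adj):
    """Node degrees; a hub is a node with unusually high degree."""
    return adj.sum(axis=1)

# toy 'hub' graph: node 0 connected to all others (a star)
n = 6
adj = np.zeros((n, n), dtype=bool)
adj[0, 1:] = adj[1:, 0] = True
eff = global_efficiency(adj)
deg = degrees(adj)
```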
The GANGO method produced graphs with a scale-free degree distribution. More specifically, the degree distributions were skewed by the existence of hub nodes with very high connectivity (some with total degree exceeding 20). See Figure 3 . These hub nodes were disproportionately concentrated in brain networks tied to attention and executive control, while Pearson correlations instead emphasized hub connectivity of early sensory regions. Graphs produced by the GANGO method also show small-world connectivity, with global efficiency nearly as high as random graphs but local efficiency much higher than random graphs. Overall, this case study showed that a causal discovery algorithm specifically designed to meet the unique challenges of fMRI data recovers physiologically plausible connectomes with small-world and scale-free connectivity patterns characteristic of biological networks.

CONCLUSION
Here we have outlined nine challenges that will be faced by researchers attempting to use CDA for fMRI effective connectivity analysis, and presented a recent case study that attempted to resolve at least some of these challenges. We have also discussed challenges that remain following this case study, such as the continuing search for CDA methods that can discover densely connected graphs and hub nodes with especially high connectivity resulting from scale-free connectivity profiles.
In summary, there are a number of decisions faced by researchers who hope to use CDA for fMRI analysis. By openly discussing these researcher degrees-of-freedom and a recent attempt to resolve these decisions, we hope to foster continued interest in and engagement with the idea that CDA can provide a data-driven method for reconstructing human causal connectomes.

Author Contributions
All authors contributed to the ideas in this paper and contributed to its drafting. All authors have reviewed and edited the paper’s content for correctness.
Abstract

Designing studies that apply causal discovery requires navigating many researcher degrees of freedom. This complexity is exacerbated when the study involves fMRI data. In this paper we (i) describe nine challenges that occur when applying causal discovery to fMRI data, (ii) discuss the space of decisions that need to be made, (iii) review how a recent case study made those decisions, (iv) and identify existing gaps that could potentially be solved by the development of new methods. Overall, causal discovery is a promising approach for analyzing fMRI data, and multiple successful applications have indicated that it is superior to traditional fMRI functional connectivity methods, but current causal discovery methods for fMRI leave room for improvement.

PRACTICAL CHALLENGES OF APPLYING CDA TO FMRI DATA
Any research study where CDA is applied to fMRI data will have to confront numerous study design challenges. Here we enumerate 9 challenges that apply to all CDA fMRI studies.
C1: Preprocessing.
Raw fMRI data contains numerous artifacts due to a wide variety of physical, biological, and measurement technology factors. For example, there are many artifacts resulting from the fact that the brain is not a rigid, stable object: head motion will obviously impact which locations in space within the scanner correspond to which parts of the brain, but many other less obvious factors such as blinking, swallowing, and changes in blood pressure due to heartbeats, all apply pressure to the brain and cause it to move and change shape. There is a large space of methods for cleaning and preprocessing fMRI data, and these can have a large impact on any fMRI analysis [ Parkes et al., 2018 , Botvinik-Nezer et al., 2020 ]. Further, there is an interaction between how the fMRI data is cleaned and preprocessed and what CDA methods are viable. Many popular fMRI preprocessing methods produce Gaussian data, but some popular causal discovery methods require the data to be non-Gaussian [ Ramsey et al., 2014 ]. As such, we cannot choose the fMRI preprocessing method and CDA method independently of each other: they must be chosen jointly.
C2: Cycles.
Brains are known to contain both positive and negative feedback cycles [ Sanchez-Romero et al., 2019 , Garrido et al., 2007 ]. CDA methods that are capable of learning cyclic relationships will thus be preferable, ceteris paribus , to CDA methods that cannot learn models with cycles. Further, the CDA methods that can accurately learn cycles primarily operate outside the space of Gaussian distributions. However, as already mentioned, many fMRI cleaning methods force the data to be Gaussian, thus making it essentially unusable for those methods.
C3: Undersampling.
The sampling rate of fMRI imaging is much slower than the rate at which neurons influence each other. That is, typical image acquisitions only sample the brain about every 1–2 seconds [ Darányi et al., 2021 ] and the Blood Oxygen Level Dependent (BOLD) response does not peak for approximately 5–7 seconds following activation of neurons [ Buckner, 1998 ]. Meanwhile, pyramidal neurons can fire up to 10 times per second and interneurons may fire as many as 100 times per second [ Csicsvari et al., 1999 ]. Some recent CDA research has focused on undersampled time series data [ Hyttinen et al., 2016 , 2017 , Cook et al., 2017 , Solovyeva et al., 2023 ], however the application of these approaches to parcellated fMRI data remains largely unexplored.
C4: Latents.
It is plausible that our measured variables are influenced by some unmeasured common causes, such as haptic or interoceptive feedback from the peripheral nervous system, or even inputs from small brain regions that are not included separately in parcellations but typically lumped together, such as the raphe nucleus (serotonin), the locus coeruleus (norepinephrine), and the ventral tegmental area (dopamine).
C5: Spatial smoothing.
fMRI is subject to poorly characterized spatial smoothing resulting from the scanner itself and also from standard preprocessing that typically includes spatial smoothing with a Gaussian kernel [ Mikl et al., 2008 ]. This induces correlations between nearby brain areas. Since these correlations are due only to the measurement and standard preprocessing technologies, they do not reflect causal processes inside the brain. However, current CDA methods will universally attempt to explain these correlations with conjectured causal mechanisms. Performing analysis at the level of parcellations of voxels — biologically interpretable spatially contiguous sets of voxels — is the typical approach that fMRI researchers use to ameliorate this problem.
C6: High Dimensionality.
At present, the smallest brain areas that are commonly used for full-brain connectivity analysis are parcellations [ Glasser et al., 2016 , Schaefer et al., 2018 ]. These are smaller than most Regions of Interest (ROIs) or other spatially defined “brain networks” (e.g. the “Default Mode Network”), but much larger than voxels. Individual parcels typically comprise 100–200 voxels or more [ Glasser et al., 2016 ]. Voxels are in turn much larger than neurons, containing around 1 million neurons in an 8 mm³ voxel [ Ip and Bridge, 2021 ]. There are multiple different whole-brain parcellations, but they typically include hundreds of parcels. Some examples include the recent multimodal parcellation of the human cortex into 360 parcels by Glasser et al. [2016 ], and the multiscale parcellation of Schaefer and colleagues that includes between 100 and 1000 parcels [ Schaefer et al., 2018 ]. The analysis algorithm must therefore scale to hundreds or several hundreds of variables, which excludes many CDA methods.
C7: High Density.
Brain networks are densely connected. On average, the nodes of parcellated fMRI networks are typically connected to at least 10 other nodes [ Rawls et al., 2022 ]. With some recent exceptions, most CDA methods have reduced performance and greatly increased computational cost on models with such high density [ Lam et al., 2022a ].
C8: Scale-free structure.
Brain networks are scale-free at many different resolutions, including at the resolution of fMRI parcellations [ Watts and Strogatz, 1998 , Rawls et al., 2022 ]. This scale-free type of connectivity is characterized by having a small number of extremely well-connected nodes, aka hubs. These hubs play critical roles in organizing complex brain functions where multiple regions interact [ van den Heuvel and Sporns, 2013 , Crossley et al., 2014 ]. For example, the anterior cingulate cortex (ACC) is densely interconnected with other subcortical and cortical regions, receiving information about emotion and valuation from subcortical brain systems and sending information about the need for control to other brain regions. The extremely high connectivity of these nodes may make them more difficult for some CDA methods to learn, especially as many methods encode sparsity biases that prefer models with more distributed connectivity (like Erdős–Rényi models) [ Erdős and Rényi, 1960 , Karoński and Ruciński, 1997 ].
C9: Limited Samples.
fMRI brain data from a single session typically has sample sizes (number of images/frames) that range from several hundred to two thousand [ Volkow et al., 2018 , Elam et al., 2021 , Alfaro-Almagro et al., 2018 ]. Modern imaging protocols involve imaging at a rate of about 1 capture per second, and the participant is required to stay extremely still. This includes not swallowing and not blinking too much. This is not a comfortable experience, making the total duration of scanning, and thus the total sample size from a single session, necessarily limited. Methods that require more than a few thousand data points are thus not feasible for fMRI, unless they are intended for analyzing multiple sessions or subjects (and thus not focused on our goal of modeling an individual person’s causal connectome at a point in time).
To summarize, the ideal analysis will (1) preprocess the data in a way that cleans as many artifacts as possible while enabling the chosen CDA method, (2) be able to recover causal cycles, (3) be relatively unaffected by or explicitly model temporal undersampling, (4) allow for the possibility of unmeasured confounding, (5) not become biased by spatial smoothing, (6–7) scale to, and retain strong performance on, data with hundreds of densely-connected (average degree 10 or more) variables, (8) retain strong performance for hub nodes in scale free models, (9) achieve all of the above on data with between hundreds and two thousand samples.
INTERDEPENDENCIES AMONG CDA AND FMRI PROCESSING METHODS
This section briefly covers some of the more important ways in which choice of CDA method and choice of fMRI processing methods interact, and how these complexities can be successfully navigated.
First, as discussed above, the better CDA methods for learning cycles rely on non-Gaussian statistics, yet many preprocessing methods force the fMRI data toward a Gaussian distribution. While the Cyclic Causal Discovery (CCD) algorithm [ Richardson, 1996 ] can recover causal graphs with cycles from Gaussian data, it performs poorly on finite samples and is rarely used. Methods exploiting non-Gaussian structure in BOLD data achieve higher precision and recall on simulated BOLD data [19]. Fortunately, there are approaches to preprocessing fMRI that do not completely remove the non-Gaussian signal; those approaches are recommended when using CDA on fMRI data.
Preprocessing removes artifacts and recovers physiological brain signals via some combination of temporal filtering, spatial smoothing, independent components analysis (ICA), and confound regression [ Glasser et al., 2013 ]. Some of these steps, particularly temporal filtering, can drastically modify the data distribution. For example, [ Ramsey et al., 2014 ] demonstrated that certain high-pass temporal filters made parcellated fMRI time series more Gaussian. This effect was particularly strong for Butterworth filters, which were applied in [ Smith et al., 2011 ]. As such, it is likely that the results of [ Smith et al., 2011 ] were unrealistically pessimistic with regard to methods that assume non-Gaussianity. The effect also holds for the filter built into the Statistical Parametric Mapping (SPM) software, while being negligible for the filter built into the FMRIB Software Library (FSL). Thus, high-pass filtering, including the specific software and filter used, is a critical point of attention when preparing data for CDA.
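The Gaussianization effect can be seen concretely in a hedged numpy/scipy sketch (our own construction, not the Ramsey et al. analysis): a toy parcel signal whose skew lives in the low frequencies loses most of its skewness after a zero-phase Butterworth high-pass.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def skewness(x):
    """Standardized third moment of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (z**3).mean() / (z**2).mean() ** 1.5

rng = np.random.default_rng(0)
fs = 1.0                        # roughly one volume per second
t = np.arange(4800) / fs

# Toy parcel signal: a slow, right-skewed physiological component plus
# fast Gaussian noise. The non-Gaussianity lives in the low frequencies.
slow = np.exp(np.sin(2 * np.pi * 0.004 * t))
signal = slow + 0.3 * rng.standard_normal(t.size)
skew_raw = skewness(signal)

# 4th-order Butterworth high-pass at 0.02 Hz, applied forward-backward.
b, a = butter(4, 0.02, btype="highpass", fs=fs)
filtered = filtfilt(b, a, signal)
skew_filtered = skewness(filtered)
# skew_raw is clearly positive; skew_filtered is near zero, because the
# filter removed the slow skewed component and left Gaussian noise.
```

The cutoff (0.02 Hz) and the frequency of the skewed component are illustrative choices; the point is only that a high-pass filter can strip exactly the signal a non-Gaussian CDA method needs.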
For filtered data that retain non-Gaussianity in the BOLD signal, it is crucial to confirm that the data meet the distributional assumptions of the chosen method. A recent cortex-wide human causal connectome analysis found that minimally preprocessed cortical BOLD signal was non-Gaussian for all subjects [ Rawls et al., 2022 ]. In that same dataset, however, the subcortical parcel time series were not significantly non-Gaussian. This could stem from Gaussian noise corrupting non-Gaussian BOLD activity, especially since subcortical regions typically display low signal-to-noise ratios. BOLD data could potentially be made more compatible with CDA methods by employing newer denoising techniques such as NORDIC [ Moeller et al., 2021 , Vizioli et al., 2021 ], which suppresses the Gaussian thermal noise that dominates high-resolution acquisitions. However, to date we are unaware of any studies that pursue this combination of preprocessing methods and CDA.
ADDITIONAL COMPLICATIONS OF CDA ON FMRI
Regarding the challenges of high dimensionality and limited samples, causal discovery fortunately offers numerous solutions. All of the methods discussed in Section 3 can scale to the number of parcels found in the most widely used parcellations while maintaining good performance (although both runtime and performance can vary substantially across these methods).
The structural challenges of high-density and scale-free models also have some solutions. In particular, recently developed permutation-based methods such as GRaSP and BOSS both retain their high performance as model density increases. These methods have increased computational cost compared to faster methods like fGES, but can still scale comfortably to hundreds of parcels, even on a personal computer. Most other methods appear to have substantial drops in performance as density increases, so using one of the few methods that tolerates high density models is recommended.
Since fMRI preprocessing may leave non-Gaussian marginal distributions, it is worth asking whether CDA methods that assume a linear-Gaussian model still perform well. In general, such methods retain their performance for edge adjacencies [ Smith et al., 2011 ], while their performance on edge orientations is mixed: there are known cases where the orientations become essentially random [ Smith et al., 2011 ], while we have observed other cases where the orientations exhibit only a slight drop in accuracy.
Collectively, the challenges and available tools point towards a particular approach:
to ensure sample size is not too small (C9), analyze data from study protocols that allow for adequate scanner time for each individual session;
for preprocessing (C1), remove as many fMRI artifacts as possible while retaining as much non-Gaussianity in the marginals of the parcellated time-series data as possible;
for undersampling (C3) and spatial smoothing (C5), use a cross-sectional approach to analyze parcellations;
due to high dimensionality (C6), high density (C7), and scale-free brain structure (C8), use a scalable high-density-tolerant method to learn the skeleton (adjacencies) of the parcels;
in order to learn cycles (C2) use a non-Gaussian orientation method to re-orient edges;
in consideration of possible latent confounding (C4), either use a scalable CDA method capable of learning both cycles and latent confounding, or focus the primary results on aggregate features of the model, such as connections among multiple parcellations in shared networks, rather than on individual edges.
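The recipe above can be sketched end-to-end on synthetic data. In this hedged toy (our construction), the skeleton step uses a simple correlation threshold purely as a stand-in for a real scalable search (fGES, GRaSP, BOSS), and the orientation step uses a plain skewness measure in the spirit of the Hyvärinen–Smith pairwise rules; both function names are ours.

```python
import numpy as np

def standardize(X):
    return (X - X.mean(0)) / X.std(0)

def skeleton_by_correlation(X, thresh=0.3):
    """Toy stand-in for a scalable skeleton search: declare an
    undirected edge wherever |corr| exceeds a threshold."""
    C = np.corrcoef(X.T)
    p = C.shape[0]
    return [(i, j) for i in range(p) for j in range(i + 1, p)
            if abs(C[i, j]) > thresh]

def orient_by_skew(x, y):
    """Pairwise skewness-based orientation: for positively skewed,
    standardized variables, x -> y iff rho * (E[x^2 y] - E[x y^2]) > 0."""
    rho = np.mean(x * y)
    stat = np.mean(x**2 * y) - np.mean(x * y**2)
    return "x->y" if rho * stat > 0 else "y->x"

# Synthetic 3-parcel example with known ground truth 0 -> 1 -> 2,
# driven by positively skewed (centered exponential) disturbances.
rng = np.random.default_rng(1)
n = 20000
e = rng.exponential(1.0, size=(n, 3)) - 1.0
X = np.empty((n, 3))
X[:, 0] = e[:, 0]
X[:, 1] = 0.7 * X[:, 0] + e[:, 1]
X[:, 2] = 0.7 * X[:, 1] + e[:, 2]
Z = standardize(X)

edges = skeleton_by_correlation(Z)
oriented = {(i, j): orient_by_skew(Z[:, i], Z[:, j]) for i, j in edges}
```

The skew rule follows from a short moment calculation: if y = rho*x + noise with standardized variables and positively skewed disturbances of skewness gamma, then E[x^2 y] - E[x y^2] = rho*gamma*(1 - rho), which shares its sign with rho exactly when x causes y.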
The next section reviews a project where this general approach was taken.
CASE STUDY: APPLICATION OF THIS APPROACH TO THE HUMAN CONNECTOME PROJECT (HCP)
A previous project can serve as an example of the above thought process [ Rawls et al., 2022 ]. In that project, the authors made the following considerations with respect to the nine challenges.
Challenge 1 (preprocessing):
The minimal HCP processing pipeline [ Glasser et al., 2013 ] was used, to conserve as much non-Gaussian signal as possible and enable the use of non-Gaussian CDA methods for learning cycles. Non-Gaussianity of the data was statistically verified by simulating surrogate Gaussian data for comparison.
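A minimal version of such a surrogate test might look as follows (a simplification of what a real analysis would do: proper surrogate schemes also preserve autocorrelation, e.g. via phase randomization, whereas these surrogates only match mean and variance).

```python
import numpy as np

def skewness(x):
    z = x - x.mean()
    return (z**3).mean() / (z**2).mean() ** 1.5

rng = np.random.default_rng(2)
n = 1200                                  # one HCP-length run

# Stand-in for one parcel's minimally preprocessed time series
# (right-skewed by construction).
data = rng.gamma(shape=3.0, scale=1.0, size=n)
obs = abs(skewness(data))

# Null distribution: Gaussian surrogates matched in mean and variance;
# compare the observed |skewness| against the surrogate distribution.
surrogates = [abs(skewness(data.mean() + data.std() * rng.standard_normal(n)))
              for _ in range(1000)]
p_value = (1 + sum(s >= obs for s in surrogates)) / (1 + len(surrogates))
# A small p_value indicates the parcel retains significant
# non-Gaussian signal usable by skew-based orientation methods.
```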
Challenge 2 (cycles):
RSkew was used to re-orient edges [ Hyvärinen and Smith, 2013 ] after confirming non-Gaussianity of the preprocessed cortical parcellations, enabling discovery of cycles involving three or more variables.
Challenge 3 (undersampling):
The time series element of the data was ignored, and the parcellated time series were instead analyzed as cross-sectional data. While this approach does not make use of the available time-order information, it avoids relying on the heavily undersampled time dimension of the data.
Challenge 4 (latents):
No effort was made to directly model or account for latent variables. Findings were reported at an aggregate level rather than individual edges.
Challenge 5 (spatial smoothing):
These data were only minimally smoothed (2 mm in surface space) [ Glasser et al., 2013 ], so excessive smoothing was not introduced. In addition, parcellated time series were analyzed rather than voxels, which further reduced the impact of smoothing.
Challenge 6 (high dimensionality):
A 360-node cortical parcellation was used [ Glasser et al., 2016 ]. fGES, which is among the most scalable CDA methods [ Ramsey et al., 2017 ], was used for the more computationally intensive adjacency search.
Challenge 7 (high density):
The study used fGES, which scales well but can have lower performance for extremely dense graphs. Better performance for dense brain graphs could potentially be achieved by applying a high-density-tolerant algorithm such as GRaSP [ Lam et al., 2022a ].
Challenge 8 (scale free):
Rawls et al. [2022 ] reported the existence of nodes that were more highly connected than expected by chance, which is characteristic of scale-free networks. See Figure 3 . However, for especially highly-connected hub regions, some methods such as GRaSP might provide higher precision for assessing scale-free structure in future studies.
Challenge 9 (limited samples):
The HCP collected two fMRI runs per day, each comprising 1200 images. The study applied CDA to the concatenated standardized time series from these two runs, so the number of samples was extremely high (2400 total). The CDA methods that were selected are also known to have good finite-sample performance.
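The concatenation step can be sketched as follows (illustrative shapes only; z-scoring each run per parcel before pooling avoids introducing spurious between-run mean shifts into the pooled sample).

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_run, n_parcels = 1200, 360

# Two stand-in runs with different per-run means and scales.
run1 = 5.0 + 2.0 * rng.standard_normal((n_per_run, n_parcels))
run2 = 7.0 + 3.0 * rng.standard_normal((n_per_run, n_parcels))

def zscore(run):
    """Standardize each parcel within a run, so runs with different
    means and scales can be pooled without artificial level shifts."""
    return (run - run.mean(axis=0)) / run.std(axis=0)

pooled = np.vstack([zscore(run1), zscore(run2)])   # shape (2400, 360)
```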
Overall:
This recent large-scale application of CDA for deriving individualized causal connectomes addressed many of the challenges we identified. However, the challenges of high density and scale-free connectivity could potentially be better addressed by applying newer permutation-based CDA methods [ Lam et al., 2022b ]. Several challenges, such as limited samples, spatial smoothing, and preprocessing, were partially or entirely solved by the specific data set the method was applied to, and might pose problems in other data sets.
RESEARCH GAPS AND PROMISING FUTURE DIRECTIONS
We have outlined nine challenges researchers will face when attempting to apply CDA to parcellations of fMRI data, as well as some available CDA technologies and their ability to overcome those challenges. The case study discussed in Section 6 attempted to use a mixture of strategies to overcome those challenges, but many challenges remained only partially addressed, or were even largely ignored. In this section, we review the current research gaps as we perceive them, and point towards future directions to empower future applications of CDA to better elucidate the brain’s causal connectome for both scientific and medical purposes.
Gap 1: CDA methods for high-dimensional, high-density, scale-free models.
Previous work [ Lam et al., 2022a ] has shown that many popular CDA methods unfortunately do not perform well when nodes have larger numbers of connections to other nodes. The only currently published method that appears to overcome this limitation is restricted to about 100 variables, substantially fewer than most parcellations [ Glasser et al., 2016 , Schaefer et al., 2018 ]. We are aware of research on new algorithms and CDA implementation technologies that may close this gap, although that work has not yet been published. Once available, such methods could be used as a replacement for methods like fGES in future fMRI studies.
Gap 2: Reliance on skewed data.
While minimal preprocessing retains statistically significant skew in some fMRI data, we are also aware of fMRI data in which, even after only minimal preprocessing, the data are not significantly skewed. One possible future direction would be to incorporate additional information from higher moments such as kurtosis, so that non-Gaussian orientation methods can be used as widely as possible. Another approach would be to distill as much non-Gaussian signal as possible using a method like independent components analysis (ICA) [ Comon, 1994 ] to construct a new set of features from the parcellated time series, and then perform CDA on the maximally non-Gaussian components of each parcel instead.
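A sketch of such a higher-moment check (our own construction, with textbook large-sample standard errors): compare approximate z-scores for skewness and excess kurtosis, so that symmetric but heavy-tailed parcels are still flagged as carrying usable non-Gaussian signal.

```python
import numpy as np

def moment_z_scores(x):
    """Approximate z-scores for sample skewness and excess kurtosis,
    using the large-sample standard errors sqrt(6/n) and sqrt(24/n)."""
    n = x.size
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)
    excess_kurt = np.mean(z**4) - 3.0
    return skew / np.sqrt(6.0 / n), excess_kurt / np.sqrt(24.0 / n)

rng = np.random.default_rng(4)
# Laplace data: symmetric (little skew) but heavy-tailed (high kurtosis).
x = rng.laplace(size=5000)
skew_z, kurt_z = moment_z_scores(x)
# A skew-only orientation rule finds little to work with here, while a
# kurtosis-aware rule still sees strong non-Gaussian structure.
```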
Gap 3: Latent variables.
While there exist CDA methods that do not assume causal sufficiency, and thus can tolerate and even identify unmeasured common causes, they generally have significant difficulties with other challenges. For example, the standard methods for handling latent variables, like FCI and GFCI, do not allow for cycles, have limited scalability, and perform poorly for high-density models. The Two-Step algorithm [ Sanchez-Romero et al., 2019 ] can in theory incorporate unmeasured confounding in its models, however we are not aware of any theoretical or practical evaluation of its performance in the presence of unmeasured confounding.
Gap 4: Extension to other brain imaging technologies.
Future exploration should expand CDA methodology to neural time series beyond the BOLD signal (fMRI data). For example, electroencephalography (EEG) provides a dynamic view of brain activation with exceptional temporal resolution. Current EEG causal connectivity analysis techniques, such as Granger causality, are fruitful yet limited. These methods do not differentiate brain oscillations from aperiodic activity, which is critical given recent evidence that aperiodic activity sometimes wholly explains group differences in power spectral density [ Merkin et al., 2023 ]. Techniques have recently emerged for separating aperiodic and oscillatory contributions to EEG power spectra [ Donoghue et al., 2020 ], even extending to the time-frequency domain for time-resolved separation [ Wilson et al., 2022 ]; however, these have not yet been incorporated into neural connectivity analyses. EEG data also present challenges in identifying effective connectivity patterns due to volume conduction: the instantaneous, passive conduction of electricity through the brain, separate from actual neural interactions, which results in non-independent EEG sensor-level estimates [ Nunez and Srinivasan, 2006 ]. This non-independence hampers CDA, necessitating removal of volume conduction from the data before connectivity estimation (EEG preprocessing). We suggest these obstacles could be mitigated by first removing volume conduction from EEG data, then separating the aperiodic and oscillatory spectral contributions. Applying CDA to the isolated EEG oscillatory power estimates could then reveal effective connectivity patterns unhindered by aperiodic activity.
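The aperiodic/oscillatory separation can be illustrated with a crude sketch (a much-simplified cousin of the specparam/FOOOF approach, not their actual algorithm; all values are synthetic): fit the aperiodic component as a line in log-log space, then look for oscillatory peaks in the residual.

```python
import numpy as np

rng = np.random.default_rng(5)
freqs = np.linspace(2.0, 40.0, 200)

# Toy EEG power spectrum: 1/f^chi aperiodic background plus an
# alpha-band oscillatory peak near 10 Hz.
chi = 1.5
aperiodic = 10.0 * freqs ** -chi
oscillation = 0.3 * np.exp(-0.5 * ((freqs - 10.0) / 1.0) ** 2)
psd = (aperiodic + oscillation) * (1.0 + 0.02 * rng.standard_normal(freqs.size))

# Fit the aperiodic component as a straight line in log-log space and
# inspect the residual for oscillatory peaks.
slope, intercept = np.polyfit(np.log(freqs), np.log(psd), 1)
residual = np.log(psd) - (intercept + slope * np.log(freqs))
peak_freq = freqs[np.argmax(residual)]   # should land near 10 Hz
```

In this construction the recovered slope approximates -chi and the residual isolates the alpha peak; a connectivity analysis would then operate on the oscillatory part rather than the full spectrum.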
ACKNOWLEDGEMENTS
ER was supported by the National Institutes of Health’s National Center for Advancing Translational Sciences, grants TL1R002493 and UL1TR002494. BA was supported by the Comorbidity: Substance Use Disorders and Other Psychiatric Conditions Training Program T32DA037183. EK was supported by grants P50 MH119569 and UL1TR002494.
Preprint: ArXiv, 2023 Dec 20; arXiv:2312.12678v1. License: CC BY.
PMC10775355 (PMID: 38196754)
Materials and Methods
Flies were maintained using standard methods, and embryos were collected and prepared for imaging and laser surgery as previously described ( 35 , 66 – 68 ). Cell junctions were labeled via ubiquitous expression of DE-cadherin-GFP ( 69 ). Images were captured using Micro-Manager 2.0 software (Open Imaging) to operate a Zeiss Axiovert 200 M microscope outfitted with a Yokogawa CSU-W1 spinning disk confocal head (Solamere Technology Group), a Hamamatsu Orca Fusion BT camera, and a Zeiss 40X LD LCI PlanApochromat 1.2 NA multi-immersion objective (glycerin). Due to the embryo’s curvature, multiple z planes were imaged for each embryo at each time point to observe the dorsal opening. We recorded stacks of eight z-slices at a 1 μm step size every 15 s throughout the closure duration, with a 100 ms exposure per slice.
Two-dimensional projections of the AS tissue were created from 3D stacks using DeepProjection ( 70 ). A custom Python algorithm was used to segment and track individual AS cells throughout dorsal closure ( 32 ): Briefly, binary masks of the AS cell boundaries and the amnioserosa tissue boundary (leading edge) were first predicted from microscopy movies using deep learning trained with expert-annotated dorsal closure specific data ( 71 ). Second, individual AS cells were segmented and tracked throughout the process using the watershed segmentation algorithm with propagated segmentation seeds from previous frames. Finally, for each cell, area, perimeter, aspect ratio and orientation in relation to the AS anterior-posterior axis were quantified over time. Based on the binary mask of the leading edge, we segmented the dorsal hole/AS shape, fitted an ellipse to it at each time point, and located the centroid position of each cell with respect to the long and short axis of the ellipse. This allowed us to precisely identify cells in the amnioserosa center (within 75% of the semi-major axis and 90% of the semi-minor axis), and exclude peripheral cells from comparisons between model and experiment. The straightness of cell-cell junctions was quantified by segmenting the contour and end-to-end lengths of individual junctions using a graph-based algorithm ( 32 ).
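The per-cell measurements described above can be sketched for a single polygonal cell as follows (our own minimal version, not the published pipeline; aspect ratio and orientation here come from the principal axes of the vertex scatter, a common choice).

```python
import numpy as np

def shape_metrics(vertices):
    """Area (shoelace formula), perimeter, and aspect ratio/orientation
    from the principal axes of the vertex scatter, for one polygonal
    cell given as an ordered list of (x, y) vertices."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * yn - xn * y))
    perimeter = np.sum(np.hypot(xn - x, yn - y))
    evals, evecs = np.linalg.eigh(np.cov(v.T))
    aspect = np.sqrt(evals[1] / evals[0])      # major/minor axis ratio
    major = evecs[:, 1]
    angle = np.arctan2(major[1], major[0])     # vs. the x (AP) axis
    return area, perimeter, aspect, angle

# Sanity check on a 4 x 1 rectangle: area 4, perimeter 10, aspect 4,
# major axis along x.
rect = [(0, 0), (4, 0), (4, 1), (0, 1)]
area, perimeter, aspect, angle = shape_metrics(rect)
```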
Laser surgery was performed on a Zeiss Axio Imager M2m microscope equipped with a Yokogawa CSU-10 spinning disk confocal head (Perkin Elmer), a Hamamatsu EM-CCD camera and a Zeiss 40X, 1.2 NA water immersion objective. Micro-Manager 1.4.22 software (Open Imaging) controlled the microscope, the Nd:YAG UV laser minilite II (Continuum, 355 nm, 4 mJ, 1.0 MW peak power, 3–5 ns pulse duration, 10 Hz, ( 72 )) and a steering mirror for laser incisions. In each embryo (N = 48), 1 to 2 cuts of approx. 5 μm length with a laser setting of 1.4 μJ were performed in the bulk of the AS at different stages of closure ( 67 , 73 , 74 ) ( Fig. S6A , B ). The response of the AS was recorded prior to (~ 20 frames), during (~ 4 frames) and after (~ 576 frames) the cut at a frame rate of 5 Hz. The straightness of each cut junction was quantified prior to the cut by manually tracing the junction end-to-end length and junction contour length using ImageJ. Then, to analyze the initial recoil velocity, the motion of the vertices adjacent to the cut junction was followed in a kymograph perpendicular to the cut (line thickness 2 μm, Fig. S6A – C ). On the basis of the kymograph, the distance between the two vertices of the severed junction was quantified manually over time using ImageJ. A double exponential function was fitted to this vertex separation over time ( Fig. S6D ). The initial slope of this function at the time of the cut corresponds to the initial recoil velocity.
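The recoil-curve analysis can be sketched on synthetic data (illustrative parameter values; we assume a sum of two saturating exponentials as the fitting form, whose analytic initial slope is a1/tau1 + a2/tau2).

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2):
    """Post-ablation vertex separation: sum of two saturating
    exponentials, one fast and one slow."""
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

# Synthetic recoil curve with known parameters, sampled at 5 Hz.
true = (2.0, 0.8, 3.0, 12.0)
t = np.arange(0, 60, 0.2)
rng = np.random.default_rng(6)
d = double_exp(t, *true) + 0.02 * rng.standard_normal(t.size)

popt, _ = curve_fit(double_exp, t, d, p0=(1.0, 1.0, 3.0, 10.0))
a1, tau1, a2, tau2 = popt
v0 = a1 / tau1 + a2 / tau2        # analytic initial slope d'(0)
v0_true = true[0] / true[1] + true[2] / true[3]
```

Note that the estimated initial recoil velocity is symmetric under swapping the two exponential components, so label switching in the fit does not affect v0.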
For the vertex model, we used the open-source CellGPU code ( 75 ). Analysis and illustration of model and experiment data were performed with custom Python scripts. Simulation code will be published on GitHub upon publication. Data associated with this study are available upon request.
Results
We tracked the following quantities during dorsal closure, in model and experiments: the mean cell shape index ⟨p⟩, with p = P/√A for a cell of perimeter P and area A; the mean aspect ratio (see SI section D ); the orientational order parameter S = ⟨cos 2θ⟩ ( 44 ) (see SI section E ) characterizing the degree of cellular alignment, where θ is the angle between the major axis of each cell and the anterior-posterior axis, so that S = 0 for randomly aligned cells and S = 1 for cells perfectly aligned with the AP axis; the standard deviation of the cell shape index; and the standard deviation of the aspect ratio.
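These descriptors can be computed as follows (a minimal sketch assuming the standard definitions p = P/√A and S = ⟨cos 2θ⟩; the regular hexagon value p ≈ 3.722 serves as a sanity check).

```python
import numpy as np

def shape_index(perimeter, area):
    """Dimensionless cell shape index p = P / sqrt(A)."""
    return perimeter / np.sqrt(area)

def orientational_order(angles):
    """S = <cos(2*theta)>: 0 for random orientations, 1 for perfect
    alignment with the reference (anterior-posterior) axis."""
    return np.mean(np.cos(2.0 * np.asarray(angles)))

# Regular hexagon of side a: P = 6a, A = (3*sqrt(3)/2) a^2 => p ~ 3.722.
a = 1.0
p_hex = shape_index(6 * a, 1.5 * np.sqrt(3) * a**2)

rng = np.random.default_rng(7)
s_random = orientational_order(rng.uniform(-np.pi / 2, np.pi / 2, 100000))
s_aligned = orientational_order(np.zeros(100))
```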
We compare experimental data and simulations without any parameter modifications, adjustments, or rescaling with time. Considering the simplicity of the model, the agreement is remarkably good, both for cell shape and cell shape variability ( Fig. 2A , B ) as well as cellular alignment ( Fig. 2C ). The mean and standard deviation of cell aspect ratio agree equally well ( SI F and Fig. S2 ). As expected, the error bars (shaded region) in the experimental data, which represent variations between different embryos, are significantly larger than those in the simulation data, which represent only variations between initial configurations drawn from a single distribution of cell shape indices measured over all embryos ( Fig. 1C ). In the experiments, there is intrinsic embryo-to-embryo variability that we did not include in our model for simplicity.
In experiment and model, the mean shape index initially decreases, reaches a minimum, and then increases ( Fig. 2A ). In the model, this behavior arises from two competing effects. (i) The decreasing preferred mean perimeter ( Fig. 1E ) implies a decreasing preferred mean shape index. According to Eq. 1 , this tends to drag down the actual mean shape index, causing the initial decrease. (ii) As dorsal closure progresses, the overall shape of the tissue becomes more and more anisotropic ( Fig. 1A , B ), elongating cells and increasing the mean shape index; this effect eventually dominates late in closure. The competition between the decreasing preferred perimeter and the increasing anisotropy is also reflected in the width of the shape-index distribution ( Fig. 2B ). In the model, the standard deviation of the preferred shape index is fixed, but cell-to-cell variations of the energy grow as the preferred perimeter decreases, leading to a narrowing of the shape-index distribution. On the other hand, vertical shrinking of the system late in the closure process broadens the distribution again ( Fig. 2B ). The increasing anisotropy during closure leads to greater alignment of cells along the anterior-posterior axis, reflected in an increased orientational order parameter ( Fig. 2C ).
A strength of the vertex model is that it predicts not only cell shape and orientation distributions but also mechanical cell-level properties of the AS, including the cell junction tension, defined in terms of the deviations of the adjoining cells’ perimeters from their preferred values ( 24 , 45 ), where the two adjoining cells are those that share a given junction. Relative values of junction tension can be estimated experimentally from the initial recoil velocity when a junction is severed using laser ablation ( 14 , 46 – 49 ). Our model predicts that the average junction tension rises during the first part of closure and then decreases as dorsal closure continues ( Fig. 2D ). To test this prediction, we conducted laser cutting experiments at different stages of closure. We find that the recoil velocity changed in a non-monotonic manner ( Fig. 2D ), as the model predicts.
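Under one common vertex-model convention (a hedged sketch; the exact prefactors here are our assumption, not necessarily the paper's), the junction tension and its rise as the preferred perimeter shrinks faster than the actual perimeter look like this:

```python
K_P = 1.0    # perimeter elastic modulus (illustrative value)

def junction_tension(P_alpha, P0_alpha, P_beta, P0_beta, k_p=K_P):
    """Tension on the junction shared by cells alpha and beta, under
    the convention T = k_p*(P_alpha - P0_alpha) + k_p*(P_beta - P0_beta):
    tension grows when actual perimeters exceed preferred ones."""
    return k_p * (P_alpha - P0_alpha) + k_p * (P_beta - P0_beta)

# If the preferred perimeters P0 are actively decreased while the
# actual perimeters lag behind, the junction tension rises.
tension_early = junction_tension(3.8, 3.7, 3.9, 3.8)   # small P - P0
tension_late = junction_tension(3.8, 3.2, 3.9, 3.3)    # large P - P0
```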
An alternative way to estimate junction tension from imaging data of unperturbed embryos is to analyze the straightness of junctions. A wiggly junction would be expected to be free of tension, whereas a straight junction should support tension. We define junction straightness as s = d/ℓ ( Fig. 2E , inset), where d is the end-to-end distance between the vertices of a given junction and ℓ is its contour length. We examined the relation between s and the initial recoil velocity upon cutting and observed that the recoil velocity is independent of s below a threshold straightness but rises linearly with increasing s above this threshold ( Fig. 2E ). It is therefore reasonable to assume that junction straightness is proportional to the tension predicted by our model. This is verified in Fig. 2F , which shows the same non-monotonicity for both quantities, with peaks occurring at the same stage of closure.
Why is the junction tension non-monotonic? In vertex models, junction tension and cell stiffness are related to the cell shape index ( 16 , 22 , 23 , 45 ). According to Eq. 2 , junction tension is given by the difference between the actual and preferred perimeters of the two cells sharing a given junction. During the first part of closure, this difference grows, leading to an increase of the tension; beyond the tension peak, the difference shrinks again, leading to a decrease of the tension.
A striking result of the standard vertex model ( Eq. 1 ) is the prediction of a transition from solid to fluid behavior as the average shape index increases above a critical value of approximately 3.81 ( 16 ), in excellent agreement with a number of experiments in various epithelial tissue models ( 22 , 50 – 52 ). Inspection of Fig. 2A shows that the mean shape index exceeds this value during the entire process of dorsal closure, suggesting that the AS should be fluid. However, the complete absence of T1 events (cell neighbor changes) shows conclusively that the AS is not fluid but solid.
Which of the extensions of the standard vertex model ( Eq. 1 ) that we have incorporated in our model are responsible for the solid nature of the AS? It is known that cellular shape heterogeneity ( 33 ) and orientational ordering ( 11 ) both enhance rigidity in vertex models (for a detailed analysis of orientational alignment, see SI section H ). In our case, cellular heterogeneity remains essentially constant during dorsal closure, but orientational ordering increases due to uniaxial deformation. Isotropic deformation ( SI section I ), in contrast, does not lead to orientational order ( Fig. S5B ), as one might expect. Incorporating uniaxial deformation is important, since the trends in cell shape, alignment, and junction tension with closure ( Fig. S5A , D , C ) fail to agree with experimental results if we apply isotropic deformation instead. However, we find that our model predicts solid behavior even for isotropic deformation, showing that uniaxial deformation is not needed for this aspect. We also find that cell ingression at the levels seen experimentally has almost no effect on the solid behavior in the model. This leaves the progressive decrease of the preferred cell perimeter as crucial for maintaining the solid response.
For the tissue to behave as a solid, cell junctions carrying non-zero tension must form continuous paths that extend across the entire system in all directions ( 33 , 53 ); in other words, they must percolate . Percolation requires the fraction f of junctions with non-zero tension to be larger than a critical fraction f_c . The rigidity transition can therefore be driven by altering either f or f_c , or both. The value of f_c has been determined for random Voronoi tessellations in a square system ( 54 – 56 ). The topology of such networks is similar to that of the standard vertex model ( Eq. 1 ), so this value can be taken as a reasonable approximation. We show in the Supplemental Information section G that the critical fraction remains fixed during dorsal closure, despite uniaxial deformation of the AS. Interestingly, the tissue would fluidize without the progressive decrease of the preferred cell perimeter ( Fig. S3B ).
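The percolation criterion can be checked with a standard union-find sweep. The sketch below uses a square lattice as a stand-in for the tissue network (the critical fraction differs between lattices, but the spanning test is the same idea): keep each bond with probability equal to the tensed fraction and test whether the kept bonds connect opposite edges.

```python
import random

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def spans_horizontally(L, tensed_fraction, rng):
    """Keep each bond of an L x L square lattice with probability
    tensed_fraction (a 'junction under tension') and test whether the
    tensed bonds connect the left edge to the right edge."""
    uf = UnionFind(L * L)
    for r in range(L):
        for c in range(L):
            i = r * L + c
            if c + 1 < L and rng.random() < tensed_fraction:
                uf.union(i, i + 1)          # horizontal bond
            if r + 1 < L and rng.random() < tensed_fraction:
                uf.union(i, i + L)          # vertical bond
    left = {uf.find(r * L) for r in range(L)}
    right = {uf.find(r * L + L - 1) for r in range(L)}
    return bool(left & right)

rng = random.Random(8)
L = 40
percolates_high = spans_horizontally(L, 0.9, rng)   # well above threshold
percolates_low = spans_horizontally(L, 0.1, rng)    # well below threshold
```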
Fig. 3 summarizes our results in the form of a phase diagram of our model, obtained by evaluating the fraction of rigid junctions over a grid of the mean cell shape index and its standard deviation. The phase boundary corresponds to the percolation transition of nonzero-tension junctions, f = f_c . The system always remains in the solid phase, consistent with the experimental observations (blue dashed line). As explained earlier, the mean shape index initially decreases because the decreasing preferred perimeter pulls the actual cell perimeters down. Eventually, however, the elongation of cells due to uniaxial deformation overcomes this effect, causing the mean shape index to increase.
Discussion
We find experimentally that the AS remains in a solid phase ( i.e ., with no cell neighbor exchanges) during dorsal closure. One might not be surprised, since cells adhere to each other. It is important to realize, though, that cadherins have rapid on-off kinetics and the actin cortex has rapid turnover on the time scale of dorsal closure. As a result, adhesion cannot necessarily prevent cell neighbor switching; it merely guarantees tissue cohesion. Indeed, the standard vertex model predicts that when cells are highly elongated, the barriers to neighbor switching should be low and the tissue should be fluid ( 11 , 17 , 22 , 23 , 57 ); one would therefore conclude from the AS cell shapes that barriers should be low and that the AS should be a fluid. Our simple extension of the standard vertex model not only predicts that the tissue should be in the solid phase but also faithfully reproduces a wide range of characteristics of an extensive set of experimental dorsal closure data: cell shape and orientational order, and junction tension, which we inferred passively from image data using the linear relationship between junction straightness and initial recoil velocity in laser cutting experiments.
Our model achieves this good agreement with only two parameters that are directly derived from experiments. We find that shape polydispersity and active shrinking of the preferred cell perimeters are the two critical factors that enable the tissue to remain solid in spite of extensive cellular and tissue shape changes. These results imply that the solid character of the AS originates from active processes that regulate cell perimeter, including junction complexes and the components of the cell cortex.
This finding raises two questions for future research. First, how is the removal of junction material specifically regulated in cells? Second, why might it be important for the AS to remain in a solid phase? Perhaps solid behavior during dorsal closure is simply a holdover from the preceding developmental stage of germ band retraction ( 58 ). Laser ablation experiments ( 59 ) suggest that the AS plays an important assistive role in uncurling of the germ band by exerting anisotropic tension on it. Such anisotropic stress requires the AS to be a solid, not fluid. An interesting future direction for experimental and vertex model studies is to establish whether the AS is solid throughout germ band retraction as well as dorsal closure.
Our results show that vertex models are more broadly applicable than previously thought. Despite the many complex active processes that occur during dorsal closure, we find that only one of them, the active shrinking of a normally-fixed parameter (the preferred perimeter), is needed in order to quantitatively describe our experimental observations. Similar variation of normally-constant parameters has been shown to allow other systems to develop complex responses not ordinarily observed in passive non-living systems. These include negative Poisson ratios ( 60 , 61 ) and allostery ( 62 ) in mechanical networks, greatly enhanced stability in particle packings ( 63 ), and the ability to classify data and perform linear regression in mechanical and flow networks ( 64 ) as well as laboratory electrical networks ( 65 ). More generally, the mechanical behavior of epithelial tissues during development is extraordinary when viewed through the lens of ordinary passive materials. It remains to be seen how much of that behavior can be understood using “adaptive vertex models” ( 41 ) within a framework that replaces ordinarily fixed physical parameters with degrees of freedom that vary with time.

Author contributions: All authors conceived and designed the research project. IT and DH performed the vertex model simulations and analyzed and visualized the data from model and experiments. JC and DPK contributed experimental data. All authors contributed to data interpretation and collaborated on writing the manuscript.
Authors contributed equally.
Abstract
Dorsal closure is a process that occurs during embryogenesis of Drosophila melanogaster . During dorsal closure, the amnioserosa (AS), a one-cell thick epithelial tissue that fills the dorsal opening, shrinks as the lateral epidermis sheets converge and eventually merge. During this process, the aspect ratio of amnioserosa cells increases markedly. The standard 2-dimensional vertex model, which successfully describes tissue sheet mechanics in multiple contexts, would in this case predict that the tissue should fluidize via cell neighbor changes. Surprisingly, however, the amnioserosa remains an elastic solid with no such events. We here present a minimal extension to the vertex model that explains how the amnioserosa can achieve this unexpected behavior. We show that continuous shrinkage of the preferred cell perimeter and cell perimeter polydispersity lead to the retention of the solid state of the amnioserosa. Our model accurately captures measured cell shape and orientation changes and predicts a non-monotonic junction tension that we confirm with laser ablation experiments.

Introduction
The developmental stage of dorsal closure in Drosophila melanogaster occurs roughly midway through embryogenesis and provides a model for cell sheet morphogenesis ( 1 – 4 ). The amnioserosa (AS) consists of a single sheet of cells that fills a gap on the dorsal side of the embryo separating two lateral epidermal cell sheets. During closure, the AS shrinks in total area, driven by non-muscle myosin II acting on arrays of actin filaments in both the AS and actomyosin-rich cables in the leading edge of the lateral epidermis ( 5 – 8 ). Ultimately, the AS disappears altogether. The entire closure process is choreographed by a developmental program that mediates changes in AS cell shapes as well as forces on adherens junctions between cells ( 9 , 10 ).
One might naively expect cells in the AS, which are glued to their neighbors by molecules such as E-cadherin, to maintain their neighbors, so that the tissue behaves like a soft, elastic solid even as it is strongly deformed by the forces driving dorsal closure. However, the time scale for making and breaking molecular bonds between cells (ms) is far faster than the time scale for dorsal closure (hours). As a result, cells can potentially slip past each other while maintaining overall tissue cohesion. Such neighbor changes could cause epithelial tissue to behave as a viscous fluid on long time scales rather than an elastic solid, as it does during convergent extension ( 11 ). Vertex models ( 12 – 21 ) have provided a useful framework for understanding how tissues can switch between solid and fluid behavior ( 16 , 17 ), and have had remarkable success in describing experimental results ( 11 , 22 – 25 ). These models make the central assumption that internal forces within a tissue are approximately balanced on time scales intermediate between ms and hours, and have successfully described phenomena such as pattern formation, cell dynamics, and cell movement during tissue development ( 26 ). Force balance is captured by minimizing an energy that depends on cell shapes. In the standard vertex model, energy barriers are lower when cells have high aspect ratios, so higher/lower cell aspect ratios correspond to fluid/solid behavior.
During dorsal closure, significant changes in AS cell shapes are observed. According to the standard vertex model, the observed high values of mean cell shape aspect ratio should render the tissue fluid ( 15 – 17 ). Nonetheless, there is considerable experimental evidence that the AS remains solid during dorsal closure with no neighbor exchanges ( 27 – 29 ). We have examined individual junction lengths using live embryo imaging in an extensive data set comprising tens of embryos, each with hundreds of cells, and, in agreement with the literature, found no vanishing junctions, and hence no neighbor exchanges, except when cells left the AS (cell ingression).
Vertex models might simply fail to describe tissue mechanics at this stage of development. The success of vertex models in describing many other tissues, however, raises the question: can the models be tweaked to capture the tissue mechanics of the AS during dorsal closure, and could this point to an important physiological control mechanism? Here we introduce a minimal extension to the standard vertex model that quantitatively captures results from comprehensive experimental datasets obtained from time-lapse microscopy recordings.
Modeling and experimental analysis
Our starting point is a standard two-dimensional cellular vertex model ( 12 – 21 , 30 , 31 ) (short introduction to vertex models in SI section A ). The AS is represented as a single-layer sheet of polygonal cells that tile the entire area, as described below. In our model, we approximate the shape of the AS tissue ( Fig. 1A ) with a rectangle whose long axis corresponds to the anterior-posterior axis of the embryo ( Fig. 1B , see SI section B for details). During simulated dorsal closure, the positions of the vertices are continually adjusted to maintain the mechanical energy of the tissue at a minimum, or equivalently, to balance the forces exerted on each vertex. The mechanical energy of the standard vertex model is defined as $E = \sum_{i=1}^{N} \left[ K_A \left( A_i - A_0 \right)^2 + K_P \left( P_i - P_{0,i} \right)^2 \right]$, where $N$ is the total number of cells, $P_i$ and $A_i$ are the actual cell perimeters and areas, $P_{0,i}$ and $A_0$ are the preferred cell perimeters and area, and $K_P$ and $K_A$ represent the perimeter and area elastic moduli of the cells, respectively. The first term penalizes apical area changes away from a preferred value, and can arise from cell height changes as well as active contractions in the medio-apical actin network at constant or near constant volume. The second term combines the effects of actomyosin cortex contractility with cell-cell adhesion, where $P_{0,i}$ is the effective preferred cell perimeter ( 16 ). For simplicity, we chose the same elastic moduli for all cells.
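To make the energy concrete, here is a minimal sketch in Python (our illustration, not the authors' code; the function names and the polygon representation are our assumptions) that evaluates the vertex-model energy for cells given as closed vertex loops:

```python
import numpy as np

def polygon_area_perimeter(verts):
    """Shoelace area and perimeter of a closed polygon given as (n, 2) vertices."""
    x, y = verts[:, 0], verts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * yn - xn * y))
    perimeter = np.sum(np.hypot(xn - x, yn - y))
    return area, perimeter

def vertex_model_energy(cells, P0, A0=1.0, KA=1.0, KP=1.0):
    """E = sum_i [ KA*(A_i - A0)^2 + KP*(P_i - P0_i)^2 ] over all cells."""
    energy = 0.0
    for verts, p0 in zip(cells, P0):
        A, P = polygon_area_perimeter(verts)
        energy += KA * (A - A0) ** 2 + KP * (P - p0) ** 2
    return energy
```

A unit-square cell with preferred perimeter 4 and preferred area 1 sits exactly at zero energy; force balance in a tissue corresponds to minimizing this sum over all vertex positions.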
We used time-lapse confocal microscopy to image the entire dorsal closure process in E-cadherin-GFP embryos. We then used our custom machine-learning-based cell segmentation and tracking algorithm to create time series of cell centroid position, area, perimeter, aspect ratio, and individual junction contour lengths for every cell in the AS ( 32 ). At the onset of closure we find that cells in the AS exhibit considerable variability of the cell shape index ( Fig. 1C ). In the model, we therefore introduce initial polydispersity in the cell shape index through a normal distribution of preferred cell perimeters $P_{0,i}$. We fix the preferred cell area and use it to set our units so that $A_0 = 1$ for all cells, following Ref. ( 33 ). The distribution of actual shape index after minimizing the mechanical energy in the model is in excellent agreement with the experiments ( Fig. 1C ).
During a substantial part of closure ( Fig. 1A ), the leading edges of the two flanking epithelial sheets approach the dorsal mid-line at a roughly constant rate ( 7 ). To mimic these dynamics, we linearly decreased the vertical height of the rectangle representing the AS ( Fig. 1B ) by 0.125 % of the initial height at every step while holding the width fixed. We enforce force balance, minimizing the mechanical energy after each deformation step. We used periodic boundary conditions throughout.
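The quasi-static protocol just described can be summarized as a short loop; `minimize_energy` here is a placeholder for whatever force-balance solver is used (the state dictionary and function names are our assumptions, not the authors' implementation):

```python
def simulate_closure(state, minimize_energy, n_steps, shrink_frac=0.00125):
    """Shrink the box height by 0.125% of the *initial* height per step
    (width fixed), restoring force balance after each deformation step."""
    dh = shrink_frac * state["height"]   # fixed increment, set by initial height
    heights = []
    for _ in range(n_steps):
        state["height"] -= dh            # leading edges advance toward the midline
        minimize_energy(state)           # re-minimize the vertex-model energy
        heights.append(state["height"])
    return heights
```

Because the increment is computed once from the initial height, the shrinkage is linear in step number, matching the roughly constant closure rate observed experimentally.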
Since closure rates varied from embryo to embryo, we measured progress during closure not in terms of time, but in terms of the fractional change of the total area of the exposed AS (i.e. the dorsal opening), $\Delta A_{\mathrm{AS}}/A_{\mathrm{ref}}$, where $A_{\mathrm{ref}}$ is a reference area of the AS early during closure. In many prior studies ( 34 , 35 ), the height of the AS has been used as a descriptor of closure progress. In Fig. S1 we demonstrate that both height and area of the AS decreased monotonically and approximately linearly with time, validating our use of $\Delta A_{\mathrm{AS}}/A_{\mathrm{ref}}$ to mark the progression of closure. We began the analysis of each embryo at the same value of this progress variable, so that we could average over multiple embryos. To exclude complex tissue boundary effects, we excluded cells at the AS borders and the regions at the canthi in the comparison between model and experiment ( Fig. 1D ).
AS cells reduce their perimeter (inset Fig. 1E ) ( 36 ) during the closure process by removing a portion of junction material and membranes through endocytosis, while maintaining junction integrity ( 37 , 38 ). The average perimeter shrinks at a constant rate in the experiments ( Fig. 1E ). We therefore assume in the model that the preferred perimeter of each cell decreases linearly with closure progress at the same rate (see SI section B for details). Note that we do not change the preferred area per cell, $A_0$. For a more realistic model we could change the preferred area in proportion to the total area of the AS as it shrinks, but that would change only the pressure, and would have no effect on the rigidity transition ( 39 – 41 ).
During dorsal closure, ~ 10% of AS cells ingress into the interior of the embryo ( 3 , 42 , 43 ) (additional cells ingress at the canthi and adjacent to the lateral epidermis). In the model, we removed cells randomly at the experimentally measured rate (see details in SI ) so that roughly 10% of the AS cells disappeared over the course of dorsal closure.
For further details of the model and the experiments, see Materials and Methods and Supplemental Information .
Supplementary Material | ACKNOWLEDGMENTS.
We thank M. L. Manning and S. R. Nagel for instructive discussions. This project was supported by NIH through Awards R35GM127059 (DPK) and 1-U01-CA-254886-01 (IT), NSF-DMR-MT-2005749 (IT, AJL) and by the Simons Foundation through Investigator Award #327939 (AJL). AJL thanks CCB at the Flatiron Institute, as well as the Isaac Newton Institute for Mathematical Sciences under the program “New Statistical Physics in Living Matter” (EPSRC grant EP/R014601/1), for support and hospitality while a portion of this research was carried out. | CC BY | no | 2024-01-16 23:35:07 | ArXiv. 2023 Dec 20;:arXiv:2312.12926v1 | oa_package/a1/0b/PMC10775355.tar.gz |
||
PMC10775356 | 38196743 | Introduction
Cochlear implants are a successful neuroprosthetic that can restore hearing to people with severe sensorineural hearing loss. Because they are only partially implanted, they rely on an external hearing aid microphone that is positioned on the side of the head. The external nature of this microphone imposes many lifestyle restrictions on cochlear implant users. Patients cannot swim or play certain sports while wearing the external unit, nor can they wear it while sleeping. Additionally, an external microphone does not provide the pressure gain and sound localization cues derived from the outer ear structure. Engineering a practical internal microphone would enable a totally-implantable cochlear implant. Although development of implantable microphones has been ongoing for years, none are currently on the market. Technical approaches range from fiber-optic vibrometry [ 1 ] to capacitive displacement sensing [ 2 ]. Two devices are currently in clinical trials: a piezoelectric sensor called the Acclaim by Envoy [ 3 ] [ 4 ] and a subcutaneous microphone called Mi2000 by MED-EL [ 5 ] [ 6 ]. There is very little information available about either device, and they remain in testing.
The microphone reported here is a piezoelectric sensor paired with a charge amplifier that we call the “UmboMic”. We refer to the piezoelectric sensor as the “UmboMic sensor” and the sensor connected to the amplifier as the “UmboMic apparatus.” The UmboMic sensor detects the motion of the umbo, which is the tip of the malleus that attaches to the conical point on the underside of the eardrum. Figure 1 shows a picture of the UmboMic sensor in contact with a human umbo. Umbo displacement is large for all auditory frequencies and mostly unidirectional in humans, making it an ideal target for sensing motion. By sensing the umbo, the UmboMic apparatus has an advantage over microphones that target other parts of the ossicular chain. For example, the Acclaim by Envoy targets the incus body, which has complex modes of motion around 2 kHz.
We build the UmboMic sensor out of a thin film piezoelectric polymer called polyvinylidene difluoride (PVDF). PVDF is excellent for our application because it is highly flexible and biocompatible [ 7 ]. Typically, PVDF is considered a poor choice for small-area sensors as it is less sensitive than piezoelectric ceramics. To overcome this limitation, our design relies on the differential measurement between two layers of PVDF connected to an extremely low noise amplifier to boost the signal-to-noise ratio. This paper presents a prototype PVDF sensor and an accompanying custom low-noise differential charge amplifier. The reported UmboMic apparatus exhibits high sensitivity and low noise comparable to commercially available hearing-aid microphones such as the Sonion 65GG31T [ 8 ] and the Knowles EK3103 [ 9 ]. In the next steps, we are advancing the microphone with fully biocompatible, decades-durable materials. | Results
A hearing device should ideally have a flat frequency response from 100 Hz to 4 kHz, as this is the frequency range of human speech [ 20 ]. The UmboMic apparatus performs well between 100 Hz to 7 kHz, with the frequency response determined mostly by the middle ear impedance; Figure 10 shows the frequency response of the UmboMic apparatus normalized to ear canal pressure (where the responses were confirmed to be in the linear region). Below about 1 kHz, the middle ear is spring-like and the frequency response of the UmboMic apparatus is flat. Above 5 to 6 kHz the mass of the eardrum and ossicles dominates, causing umbo motion and thus sensor output to start decreasing.
The pinna and ear canal act like a horn and provide up to 20 dB of pressure gain between 2 kHz and 6 kHz [ 21 ]. The cadaveric specimens that we work with no longer have the pinna attached, but we can use the known transfer function of the pinna [ 22 ] to simulate the pressure gain from the outer ear and extrapolate free field data. The dotted line in Figure 10 shows the result from including the pinna. The grey line shows the noise floor of the sensor in units of fC.
We can compare the UmboMic apparatus to existing microphones through equivalent input noise (EIN), the level of acoustic noise that would account for the intrinsic electrical noise of the system. We compute EIN by dividing the noise floor by the sensitivity and normalizing to a 1/3-octave bandwidth. The EIN is a critical metric as it determines the quietest sound that the microphone can sense. Figure 11 shows the UmboMic apparatus EIN compared to that of a commercial hearing aid microphone, the Knowles EK3103. We additionally simulate the EIN when including pressure gain from the outer ear. When accounting for this pressure gain, the UmboMic apparatus is competitive—we measured an A–weighted EIN of 32.3 dB SPL from 100 Hz to 7 kHz. Our Knowles EK3103 reference microphone measured 33.8 dB SPL over the same frequency range.
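The EIN computation described here, dividing the noise floor by the sensitivity and normalizing to a 1/3-octave bandwidth, can be sketched as follows (our illustration; the variable names, units, and band-edge convention are assumptions, not the authors' analysis code):

```python
import numpy as np

P_REF = 20e-6  # 0 dB SPL reference pressure, in Pa

def ein_db_spl(noise_density, sensitivity, freqs):
    """Equivalent input noise in dB SPL per 1/3-octave band.

    noise_density : output-referred noise amplitude spectral density
                    (e.g. fC/sqrt(Hz)) at each frequency in freqs
    sensitivity   : output per unit ear-canal pressure (e.g. fC/Pa)
    """
    freqs = np.asarray(freqs, dtype=float)
    # 1/3-octave bandwidth: band edges at f*2^(+-1/6), width ~ 0.232*f
    bandwidth = (2 ** (1 / 6) - 2 ** (-1 / 6)) * freqs
    p_noise = np.asarray(noise_density) * np.sqrt(bandwidth) / np.asarray(sensitivity)
    return 20 * np.log10(p_noise / P_REF)
```

The resulting pressure-equivalent noise is expressed in dB SPL so it can be overlaid directly with the sensitivity curve, as in Figure 11.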
Dynamic range and linearity are significant concerns for hearing aid microphones. A frequency-domain plot of the UmboMic apparatus response to a 1 kHz stimulus is shown in Figure 12 , demonstrating less than 0.1 % harmonic distortion at 94.5 dB SPL at the eardrum. At a 114.5 dB SPL stimulus level, harmonic distortion was measured to be less than 1 %. Additionally, Figure 13 shows that the UmboMic apparatus is linear across at least 80 dB of sound stimulus level.
The UmboMic apparatus also effectively rejects EMI from common sources like switched mode power supplies and 60 Hz mains hum. Our measured “EMI capacitance” was approximately 0.6 fF, which represents an improvement of roughly 54 dB from our lab’s older single-ended unshielded designs [ 23 ]. We also measured minimal interference from 60 Hz mains power and harmonics and minimal electrical coupling between the test speaker and the sensor. | Conclusion
A totally-implantable cochlear implant would significantly improve the lives of users. The microphone component is one of the largest roadblocks to internalizing the entire system. Here, we present the UmboMic, a proof-of-concept prototype of a PVDF-based microphone that senses the motion of the umbo. We demonstrate that PVDF can work well as a sensing material if designed as double-layered and paired with a very low-noise differential amplifier. When considering the effect of the pinna on performance, the UmboMic apparatus achieves an EIN of 32.3 dB SPL over the frequency range 100 Hz to 7 kHz—competitive with conventional hearing aid microphones. Furthermore, the UmboMic apparatus has a flat frequency response to within ~10 dB from approximately 100 Hz to 6 kHz, low harmonic distortion, excellent linearity, and good shielding against EMI.
Our prototype demonstrates the feasibility of a PVDF-based microphone. Our future goals are to re-engineer the UmboMic sensor out of biocompatible materials. We plan to use conductors such as titanium or platinum for the patterned electrodes, and replace the flex PCB with a version made in-house from biocompatible materials. Additionally, we must consider device packaging, power system, and surgical hardware to securely hold the UmboMic apparatus in place. While these engineering challenges are substantial, our results demonstrate a suitable design concept for an implantable microphone which is competitive in performance to conventional hearing-aid microphones. | Objective:
We present the “UmboMic,” a prototype piezoelectric cantilever microphone designed for future use with totally-implantable cochlear implants.
Methods:
The UmboMic sensor is made from polyvinylidene difluoride (PVDF) because of its low Young’s modulus and biocompatibility. The sensor is designed to fit in the middle ear and measure the motion of the underside of the eardrum at the umbo. To maximize its performance, we developed a low noise charge amplifier in tandem with the UmboMic sensor. This paper presents the performance of the UmboMic sensor and amplifier in fresh cadaveric human temporal bones.
Results:
When tested in human temporal bones, the UmboMic apparatus achieves an equivalent input noise of 32.3 dB SPL over the frequency range 100 Hz to 7 kHz, good linearity, and a flat frequency response to within 10 dB from about 100 Hz to 6 kHz.
Conclusion:
These results demonstrate the feasibility of a PVDF-based microphone when paired with a low-noise amplifier. The reported UmboMic apparatus is comparable in performance to a conventional hearing aid microphone.
Significance:
The proof-of-concept UmboMic apparatus is a promising step towards creating a totally-implantable cochlear implant. A completely internal system would enhance the quality of life of cochlear implant users. | Cantilever design and fabrication
The UmboMic sensor is a triangular bimorph cantilever approximately 3 mm wide at the base, 3 mm long, and 200 μm thick. The free end of the triangular tip interfaces with the umbo to sense its motion. We design the UmboMic sensor to have a relatively uniform stress distribution in the PVDF. The UmboMic sensor is fabricated with two layers of 50 μm PVDF sandwiching a 100 μm Kapton flexible printed-circuit-board (flex PCB) substrate; this construction is detailed in the following sections and in Figure 2 . The use of a Kapton flex PCB as the core layer greatly simplifies attaching cables to the device. Additionally, the PCB design allows for the ground electrode to double as a ground shield, which works in tandem with the differential sensor output to nearly eliminate electromagnetic interference.
Designing sensor dimensions
We use a triangular shape for the UmboMic sensor as it results in a uniform stress and charge distribution throughout the sensor tip. The triangular shape is a design commonly used with piezoelectric sensors and actuators [ 10 ] as it increases the sensor’s robustness by equalizing stress concentration. A triangular shaped sensor is also practical given the anatomical limitations of the middle ear. The sensor’s tapered shape allows it to slide into position without hitting the other ossicles during insertion.
There are a few factors to consider when deciding on UmboMic sensor geometry. Our sensors must be small enough to fit through a variety of middle-ear cavity surgical entrances. However, in order to maximize the charge output of our piezoelectric sensor, we want its active surface area to be as large as possible. A larger sensor is also faster and cheaper to fabricate. We found through testing that a 3 mm by 3 mm triangular sensor tip fits well within the middle ear cavity of multiple cadaveric specimens, and is large enough to produce a sufficient output charge.
Further details on the UmboMic’s sensor design are detailed in [ 11 ].
Electrode patterning
To simplify the fabrication of the UmboMic sensor, we use a flex PCB as the base substrate of the sensor. The custom flex PCB has a polyimide core with electrode and ground traces connecting to a U.FL connector solder footprint. We use photolithography to pattern triangular charge sense electrodes at the top of the flex PCB substrate, and this constitutes the active region of our sensor.
Through experimentation we found that cantilever designs with charge sense electrodes exposed to the outside of the sensor tend to have unacceptably high parasitic leakage conductance, especially in wet environments like the middle ear cavity. Our fabrication strategy revolves around pre-patterning the charge sense electrodes and then trimming the sensor to leave a margin around the electrodes, eliminating this leakage path and improving the UmboMic apparatus noise floor.
We first apply 200 nm of aluminum to both sides of the flex PCB using an AJA sputter coater. Next, we spin-coat a layer of AZ3312 positive photoresist on both sides of the sputter-coated PCB, bake for approximately two minutes at 110 °C, align it with a contact photolithography mask, and flood-expose for 30 seconds on each side. We then dissolve the UV-exposed photoresist and the aluminum underneath using a tetramethyl ammonium hydroxide (TMAH) solution. Finally, we dissolve the remaining photoresist in acetone. Figure 3a summarizes the stages of electrode deposition and patterning.
PVDF adhesion
Before gluing the PVDF film to the sputter coated metal, we reinforce the electrical connection between the patterned electrodes and the flex PCB traces with a silver conductive ink pen. We then sand one side of the PVDF with 3000-grit sandpaper to increase surface roughness and mask the portions of the flex PCB that must remain glue-free. Next, we generously apply epoxy between the two PVDF layers and the flex PCBs. Devcon Plastic Steel epoxy works well for bonding the PVDF to the polyimide substrate. We orient the piezoelectric films such that they have opposing polarization. Finally, we squeeze as much epoxy as possible out from between the flex PCB and PVDF film with a doctor blade, and the stackup is left to cure. This method achieves a 10 μm epoxy thickness, which is sufficiently thin to allow efficient capacitive coupling. The masking and bonding process is shown in Figure 3b .
Finishing steps
After the epoxy is cured, we trim the PVDF and flex PCB to shape with scissors leaving a buffer of approximately 300 μm between the edge of the electrode and the edge of the PCB layer of the sensor, shown in Figure 3c . This buffer serves to protect the electrodes from water ingress, which could otherwise short the sensor. We then encapsulate the sensor tip with a 200 nm layer of sputter-coated aluminum. This outer layer serves as both a ground electrode and a ground shield, protecting the charge sense electrodes from EMI. We connect the step between the PVDF and the flex PCB with conductive ink or adhesive as shown in Figure 3d . This ensures the aluminum on the PVDF layer is electrically connected to the ground pad on the PCB at the tail. Finally, we solder a U.FL receptacle on either side of the tail end of the UmboMic sensor opposite from the electrodes. Figure 2 shows the stackup of the tip of the finished UmboMic sensor.
Differential Charge amplifier
It is imperative for device performance to achieve signal amplification without introducing too much noise. By developing our own differential charge amplifier, shown in Figures 4 and 5 , we minimize the noise floor while providing a gain of 20 V/pC over a −3 dB bandwidth of 160 Hz to 50 kHz. Figure 5 illustrates the charge amplifier connected to our differential sensor, with its capacitance and charge output. We also show three parasitics: a parallel capacitor, a leakage resistor, and a capacitor to ground. Estimates of the piezoelectric and parasitic component values are given in Table I . The charge-to-voltage gain of such a charge amplifier is invariant to parasitic resistance and capacitance, giving it good gain uniformity from sensor to sensor. The amplifier’s differential input interfaces with our differential-mode sensor to reduce EMI. Similar differential charge amplifiers are frequently used as low-noise preamplifiers for high-impedance AC sources such as piezoelectric sensors [ 12 ] and charged particle counters [ 13 ], [ 14 ].
Gain Analysis
Our differential amplifier comprises two parallel low-impedance input stages based on the LTC6241 (dual LTC6240) op-amp (oa1), followed by a difference stage based on the AD8617 (dual AD8613) op-amp (oa2), followed by a lead gain stage based on the AD8617 op-amp (oa3) with an output high-pass filter. The LTC6241 is chosen for its excellent noise performance. The AD8617 is chosen for its good noise performance, low bias current, and rail-to-rail operation.
The amplifier input can be interpreted as either or , which are related by The internal high-pass charge-to-voltage gain is then given by which is independent of parasitics; note that the internal current-to-voltage gain is . The overall gain of the amplifier is given by For the component values in Figure 5 , the highest high-pass cut-on frequency is set at 1000 rad/s (160 Hz) to filter out low-frequency body noise. The high-end low-pass cut-off frequency is set well above the audio range by the op-amp dynamics, and so is not modeled here. Finally, the mid-band gain is given by .
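The gain expressions in this passage did not survive extraction. As an orientation only (a textbook relation, our illustration rather than the paper's derivation, which must also account for the differential topology and later stages), an ideal charge amplifier has a mid-band gain magnitude set by its feedback capacitance, so an overall 20 V/pC target corresponds to an effective feedback capacitance of 50 fF:

```latex
\left| \frac{v_{\mathrm{out}}}{q_{\mathrm{in}}} \right| \;=\; \frac{1}{C_f},
\qquad
\frac{1}{C_f} = 20~\mathrm{V/pC} \;\Longrightarrow\; C_f = 50~\mathrm{fF}.
```

This invariance of the ideal gain to parasitic resistance and capacitance is what gives charge amplifiers their good sensor-to-sensor gain uniformity.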
Noise analysis
There are five significant noise sources in the amplifier: Johnson noise from and , voltage noise and current noise from oa1, and voltage noise from oa2. Johnson noise is treated here as a parallel current source. Being in parallel with the input current, contributes an input-referred current variance density of . Together, the two contribute the same input-referred current variance density as would a single in parallel with the input current, or . Thus, the total input-referred current variance density associated with Johnson noise is
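The Johnson-noise contributions above follow the standard result for a resistor modeled as a parallel current source (a textbook formula; the symbol $R_f$ for the feedback resistors is our assumption):

```latex
S_I^{(R)} \;=\; \frac{4 k_B T}{R},
\qquad
S_I^{\mathrm{tot}} \;=\; 2 \cdot \frac{4 k_B T}{R_f} \;=\; \frac{8 k_B T}{R_f},
```

consistent with the statement that two identical resistors together contribute the same input-referred current variance density as a single resistor of half the value.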
The noise voltages at and effectively produce an input-referred noise current determined by the impedances of the sensor and the oa1 feedback network. While the noise voltages are not completely frequency independent, flicker noise for the LTC6240 is negligible above 100 Hz. Thus, the noise voltages are modeled here as white noise sources. Recognizing that the overall amplifier will reject common-mode voltage noise, define and where and are the noise voltages at and , respectively. Then, drives the internal voltage with corresponding input-referred noise current Finally substitution of ( 6 ) and ( 7 ) into ( 9 ), and recognition that the two op-amp noise voltages are independent, yields as the corresponding input-referred current variance density, where is the voltage variance density of oal.
Each noise current of the LTC6240 can be modeled using where and are both constants [ 15 ]. It is further assumed that and are all independent; the correlation between op amp current noise and voltage noise is unspecified in [ 15 ]. When two op amps are used to construct a differential amplifier that rejects common-mode current noise, the resulting input-referred current variance density becomes Finally, the input-referred current variance density resulting from the difference stage may be expressed as
The total input-referred current variance density is obtained by summing ( 4 ), ( 10 ), ( 12 ) and ( 13 ). Dividing by gives the input-referred charge variance density Finally, expanding and , and collecting terms, gives where From this point forward, is referred to as the equivalent noise charge (ENC) density.
Practical component selection
Important design guidelines can be extracted from ( 14 ) and ( 15 ). Parasitic leakage conductance and capacitance are universally bad from a noise perspective, and should be minimized for any given sensor design. Minimizing parasitic capacitance is especially important, as the term in ( 14 ) is a significant part of the amplifier noise floor. Furthermore, the ratio of to is effectively the voltage gain of the first stage; should be several times smaller than to minimize the second-stage contribution to the noise floor. We have built working prototypes with up to 10 pF, but those with work quite well. Since the differential charge amplifier requires good matching between the two input stages to achieve an acceptable common-mode rejection ratio, we use PCB capacitors to implement . By using a four-layer PCB and building the capacitors between the bottom two layers, we can implement each capacitor in a 3 mm × 3 mm area with good matching and shielding.
The value of requires more care. Ideally, should be as large as possible but increased gives worse bias stability. We observed that increasing beyond 10 GΩ does not yield significant performance benefits.
The centerpiece of the amplifier is the low-noise op amp used for the first stage, as this sets the absolute lower bound on the noise floor. Choosing this op amp based on ( 14 ) requires balancing and over the desired frequency range and sensor capacitance. This requirement rules out op amps with bipolar or JFET input stages because these op amps typically have unacceptably high current noise. Op amps with CMOS input stages have voltage noise several times higher than top-of-the-line JFET or bipolar op amps, but with far lower current noise. Of these, the LTC6240 appears to offer the best combination of voltage noise and current noise, with the LTC6081 and LTC6078 providing respectable performance with lower power consumption. Previous use of the LT1792, which has significantly worse current noise than the LTC6240, caused the current noise to dominate the sensor noise floor at low frequencies. See Table II for an op amp comparison.
The second-stage difference amplifier requirements are far more relaxed. The AD8617 has a noise floor of approximately , and so contributes to . Each 10 kΩ resistor contributes . The total noise contribution is therefore . Using ( 2 ) gives an input-referred white noise contribution of only , which is insignificant compared to the noise floor of the complete amplifier.
Specifications
Our amplifier has a measured gain of 19.1 V/pC over a −3 dB bandwidth of 160 Hz to 50 kHz. This comes to within 5 % of our 20 V/pC target gain and exceeds our minimum target bandwidth of 200 Hz to 20 kHz. We measured an equivalent noise charge over this target bandwidth of 30 aC (185 e⁻) with no sensor attached. With one of our sensors attached, we measured the noise floor to be 62 aC (385 e⁻). Figure 6 shows the transfer function of our amplifier; Figure 7 shows its noise floor while unloaded and loaded with our sensor. Note that the analytically derived noise floor closely matches the noise floor simulated using LTspice.
The principal reason for building a custom charge amplifier is the lack of commercial low-noise amplifiers available for low-capacitance sensors. Table III illustrates this by comparison. The CEC 1–328 is the highest-performing commercially available differential charge amp we could find, while the Femto HQA-15M-10T is the best available single-ended charge amp. Our amplifier outperformed both, although their datasheets did not clearly specify the test load capacitance or spectral noise density. We also found references to charge amplifiers in the literature. The single-ended charge amp inside the ELectrostatic Dust Analyzer (ELDA) [ 14 ], [ 16 ] used an LTC6240 and performed similarly to our design, while Kelz et al. [ 17 ] created a fully-integrated differential charge amp with excellent noise performance.
Measurement techniques
We test our sensors in fresh-frozen cadaveric human temporal bones (no chemical preservatives) and conduct all measurements inside a soundproof and electrically isolated room at the Massachusetts Eye and Ear (MEE). This allows us to take accurate measurements without background electrical, vibrational, or acoustic noise. Fresh cadaveric human temporal bones are procured through Massachusetts General Hospital.
Figure 9 shows our temporal bone test setup. A 3D-printed clamp holds the UmboMic sensor under the umbo while a transparent film of plastic seals the ear canal. An external speaker introduces a sound pressure stimulus to the ear canal – typically a sinusoidal sweep from 100 Hz to 20 kHz. A calibrated Knowles EK3103 probe-tube reference microphone measures this sound pressure stimulus, with the probe tube opening directly above the eardrum. We measure over a range of ear canal pressure, from approximately 60 dB to 100 dB SPL (in the linear range). Umbo velocity at the tympanic membrane is measured with a laser Doppler vibrometry (LDV) beam through a clear window covering the ear canal.
The noise floor is measured by taking a Fourier transform of several seconds of amplifier noise with the sensor attached. Then, the Fourier transform is smoothed and normalized to a 1/3-octave bandwidth to permit direct comparison of the noise floor and sensitivity in the same graph, as shown in Figure 10 .
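A minimal version of this noise-floor pipeline might look like the following sketch (our illustration; the smoothing step is omitted, and the naive FFT amplitude scaling shown here is only exact for discrete tones, a windowed Welch estimate would be used for broadband noise in practice):

```python
import numpy as np

def third_octave_noise_floor(x, fs):
    """One-sided amplitude spectrum of record x (sample rate fs), rescaled
    to the equivalent amplitude in a 1/3-octave band at each frequency."""
    n = len(x)
    amp = 2.0 * np.abs(np.fft.rfft(x)) / n          # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    df = fs / n                                     # FFT bin width in Hz
    asd = amp / np.sqrt(df)                         # amplitude spectral density
    bandwidth = (2 ** (1 / 6) - 2 ** (-1 / 6)) * freqs  # 1/3-octave BW ~ 0.232*f
    return freqs, asd * np.sqrt(bandwidth)
```

Rescaling the density by the square root of the 1/3-octave bandwidth is what allows the noise floor and the sensitivity to share one axis in Figure 10.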
We also measure EMI sensitivity by placing the UmboMic sensor inside an aluminum foil ball without the sensor touching the foil. The foil is connected to a voltage source, thus placing the UmboMic sensor inside a nearly uniform electric potential. Because our charge amplifier has a well-defined charge-to-voltage gain, we can accurately compute the “EMI capacitance” of the UmboMic sensor, namely the charge $Q$ induced by an external potential $V$, and hence $C_{\mathrm{EMI}} = Q/V$.
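Since capacitance is charge per volt, the EMI capacitance follows directly from the amplifier's charge gain; a sketch (our illustration, with the charge recovered from the output voltage via the paper's 19.1 V/pC measured gain):

```python
def emi_capacitance(v_out, v_applied, charge_gain):
    """C_EMI = Q / V, with the induced charge recovered from the amplifier
    output as Q = v_out / charge_gain (charge_gain in volts per coulomb)."""
    return (v_out / charge_gain) / v_applied
```

For example, a 0.6 fF EMI capacitance driven by a 1 V foil potential would produce roughly 11 mV at the output of a 19.1 V/pC amplifier.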
Acknowledgements

Kurt Broderick’s (MIT.nano) and Dave Terry’s (MIT.nano) expertise were instrumental in designing the UmboMic’s fabrication process. Many thanks to Yew Song Cheng (MEE, UCSF) for helping carry out temporal bone experiments at Mass. Eye and Ear.
Submitted December 2023. This paper is partially supported by NIH Grant R01 DC016874, NSF GRFP Grant 1745302, NSF GRFP Grant 2141064, a grant from the Cloëtta Foundation, Zürich Switzerland, and the Research Fund of the University of Basel, Switzerland. License: CC BY. Citation: ArXiv. 2023 Dec 22; arXiv:2312.14339v1.
PMC10775357 | 38196649

Background
Structural and systemic factors are central to ongoing racial and socioeconomic inequities in the United States 1 – 3 . Residential location is one such structural factor that influences a range of social, economic, and health-related outcomes. Increasing attention is being given to understanding the influence of residential location on a range of health and health care outcomes. However, many methods used to study historic or current neighborhood characteristics fail to fully capture the dynamic aspects of how neighborhoods influence health-related outcomes 4 , 5 . For example, wealthy neighborhoods in the present day may have been historically affluent, accumulating wealth over time, or may have recently gained wealth as the result of investment, development, and the displacement of poorer residents. Similarly, the racial composition of neighborhoods has been historically shaped by explicit public policies and the private practices of landlords, realtors, and lending companies 6 , 7 . Some current neighborhood compositions are the result of the residual effects of legalized residential segregation, while others were altered through gentrification, a neighborhood change process that tends to displace current residents, who are often people of color, and replace them with wealthier and/or White populations 8 , 9 . These dynamic aspects of communities shape structures, systems, and interpersonal interactions beyond either the historic or current composition in isolation. To study these dynamics, we developed an approach called “Neighborhood Trajectories” to facilitate our understanding of how changing neighborhood environmental characteristics may influence current realities.
Neighborhood Trajectories combine historic attributes with current indices into classifications that capture the lasting or changing make-up of a community. Our starting point was historic maps from the 1930’s and 1940’s of urban neighborhoods across the United States (U.S.) that capture grading commonly known as “redlining.” Redlining originated with The Home Owners’ Loan Act of 1933 with the primary goal of providing government-backed residential mortgages to boost home ownership during the Great Depression. 10 The Home Owners’ Loan Corporation (HOLC) then graded neighborhoods in hundreds of cities across the U.S. based on perceived risks of mortgage loan defaults. 11 In addition to the general environmental and economic conditions of neighborhoods, one of the key factors in determining neighborhood risk was the presence of “undesirable” inhabitants, African Americans, or foreign-born individuals. The legacy of redlining lingers, influencing both racial and socioeconomic makeup of communities in present day and shaping structural racism in place. 10 – 12
The practice of redlining, however, did not arise from a vacuum, nor has its legacy been fixed in time. Ongoing policies and systems have contributed to the evolution of neighborhood socioeconomic and racial composition and characteristics, including the perpetuation of all-white “sundown” towns, the use of restrictive covenants, the development of the interstate highway system, the gentrification of communities, and the selective investment in or displacement of populations 13 – 17 . This interplay between historic foundations and the ongoing evolution of policies and practices has created neighborhoods with discrete socioeconomic, housing, and transportation characteristics.
One popular measure of current socioeconomic conditions is the Area Deprivation Index (ADI) 18 . The ADI is a composite measure of 17 different U.S. census variables at the level of census block group. These variables include measures of poverty and wealth, education, employment, housing quality, and housing composition 19 .
The primary objective of this project was to develop a neighborhood classification system (“Neighborhood Trajectories”) that captured both historic redlining and current socioeconomic conditions, represented by the ADI. We hypothesized that while many neighborhoods would maintain similar characteristics over time, we would also be able to capture specific locations where socioeconomic conditions may have improved, declined, or remained stable. Furthermore, we hypothesized that different Neighborhood Trajectories would have distinct socioeconomic and demographic compositions.
This paper describes the methods and results of the creation of Neighborhood Trajectories using historic HOLC redlining maps and current socioeconomic characteristics available through U.S. census data. This is an adaptable method enabling researchers to choose different socioeconomic endpoints and develop study-specific Neighborhood Trajectories as a way of describing and capturing neighborhood changes over time. Specific socioeconomic or demographic measures could be used depending on the policy, practice, or neighborhood characteristic being evaluated for a given place and time period. We applied this method to describe regional differences in the Neighborhood Trajectories from residential redlining to current socioeconomic deprivation, and the variation across trajectories in the racial composition of Non-Hispanic/Latino Black residents and Non-Hispanic/Latino White residents.

Methods
Block Groups
Our study area was the contiguous United States. We selected the U.S. Census block group as our areal unit of analysis for defining a neighborhood for several reasons. Block groups have relatively small geographic areas, with a population range of approximately 600–3000, and the block group is the unit used by the Area Deprivation Index (ADI). In addition, using the smaller block group polygons rather than a census tract or county allowed us to capture the areas graded under the HOLC system more precisely.
We obtained block group-level population and ethnoracial composition data from the 2020 decennial U.S. Census and the block group polygons from the IPUMS National Historical Geographic Information System 20 .
HOLC Grades
The Mapping Inequality: Redlining in New Deal America project digitized HOLC neighborhoods and made the resulting shape files available for download on their website ( https://dsl.richmond.edu/panorama/redlining ). 21 We intersected these HOLC neighborhood polygons with the 2020 U.S. Census block group polygons for the entire nation 20 . Most of the block groups were outside of the cities with digitized HOLC polygons and were removed from our study area. Within the cities in the HOLC program, we wanted to avoid assigning HOLC grades to areas that were developed after the HOLC maps, so we also removed any block group that had less than 50% overlap with a HOLC neighborhood. For the remaining block groups, we assigned a HOLC grade based on the relative proportion of the graded areas in each block group. To do this we assigned a value to each grade: 1 for grade A-Best, 2 for grade B-Still Desirable, 3 for grade C-Definitely Declining, and 4 for grade D-Hazardous. We then multiplied the proportion of the graded area in each block group by the assigned value, summed the products, and rounded to the nearest integer, which was then converted back to a HOLC grade for each remaining block group ( Fig. 1 ).
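A sketch of this area-weighted grading rule (function and variable names are illustrative; tie-breaking at .5 follows Python's round, which the paper does not specify):

```python
GRADE_VALUES = {"A": 1, "B": 2, "C": 3, "D": 4}
VALUE_GRADES = {v: g for g, v in GRADE_VALUES.items()}

def assign_holc_grade(area_fractions):
    """Assign a single HOLC grade to a block group.

    area_fractions: dict mapping HOLC grade letters to the proportion
    of the block group's graded area they cover (fractions sum to 1),
    e.g. {"C": 0.6, "D": 0.4}.
    """
    score = sum(GRADE_VALUES[g] * frac for g, frac in area_fractions.items())
    return VALUE_GRADES[round(score)]
```

For example, a block group that is 60% grade C and 40% grade D scores 3 × 0.6 + 4 × 0.4 = 3.4, rounding to grade C.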
Area Deprivation Index
The Area Deprivation Index (ADI) is available as national percentiles or state-level deciles. 18 , 22 For our application, we chose the state-level deciles, as individual state policies may have influenced how neighborhoods changed over time. The state deciles rank block groups from 1-Least Deprived to 10-Most Deprived based on a composite of U.S. Census characteristics. Based on the distribution of the counts of block groups in our study area (i.e., block groups that had an assigned HOLC grade), we collapsed the ADI deciles into rough quartiles of block groups. We named these new ADI categories Least Deprived (deciles 1–2), Less Deprived (deciles 3–5), More Deprived (deciles 6–8), and Most Deprived (deciles 9–10).
Neighborhood Trajectories
The Neighborhood Trajectories allow us to describe and evaluate changes in neighborhoods from historic HOLC grades to present ADI. To create the Neighborhood Trajectories, the block groups were categorized as “Advantage Stable” for block groups with HOLC grade A-Best or B-Still Desirable and ADI categories of Less or Least Deprived; “Advantage Reduced” for HOLC grade A-Best or B-Still Desirable and ADI categories of More or Most Deprived; “Disadvantage Reduced” for HOLC grade C-Definitely Declining or D-Hazardous and ADI categories of Less or Least Deprived; and “Disadvantage Stable” for HOLC grade C-Definitely Declining or D-Hazardous and ADI categories of More or Most Deprived.
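The four-way classification above reduces to two booleans; a compact sketch (names are ours, the mapping follows the category definitions in the text):

```python
def neighborhood_trajectory(holc_grade, adi_decile):
    """Map a block group's historic HOLC grade ('A'-'D') and current
    state-level ADI decile (1-10) to its Neighborhood Trajectory.
    Deciles 1-5 correspond to the Least/Less Deprived categories and
    deciles 6-10 to the More/Most Deprived categories."""
    advantaged_then = holc_grade in ("A", "B")
    deprived_now = adi_decile >= 6
    if advantaged_then:
        return "Advantage Reduced" if deprived_now else "Advantage Stable"
    return "Disadvantage Stable" if deprived_now else "Disadvantage Reduced"
```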
Results

There were 241,764 block groups, of which 44,330 (18%) overlapped at least partially with a HOLC neighborhood and 32,646 (14%) overlapped at least 50% and therefore met our inclusion criteria. An ADI decile was not assigned to 502 of these block groups due to low population counts and/or high populations residing in group quarters (e.g., dormitories, prisons) 23 , leaving 32,144 (13%) block groups with both an ADI decile and HOLC grade ( Fig. 2 ) in 201 cities across the United States.
Grade C-Definitely Declining had the most block groups (14,880; 46%) while Grade A-Best had the least (1,838; 6%). The ADI groups ranged between 7,921 (Most Deprived) and 8,338 (Less Deprived) block groups. The largest Neighborhood Trajectories were Disadvantage Stable (12,134; 38%) and Disadvantage Reduced (10,750; 33%), with Advantage Stable (5,535; 17%) and Advantage Reduced (3,725; 12%) representing the Neighborhood Trajectories with the fewest block groups ( Fig. 2 ).
The flow of neighborhoods from the 1930’s-40’s HOLC grading system to the contemporary ADI group via the Neighborhood Trajectories is shown in Fig. 3 . HOLC grades B-D are roughly equally split between stable and reduced trajectories. However, 78% of Grade A remained Advantage Stable while only 22% are on the Advantage Reduced trajectory. These splits are not consistent across geographic regions ( Fig. 4 ). Overall, HOLC Grade C has 47% of its block groups in the Disadvantage Reduced trajectory and 53% in Disadvantage Stable. The proportions are reversed for the Northeast (the region with the most block groups in the study), with 56% Disadvantage Reduced and 44% Disadvantage Stable, whereas the neighboring Midwest has only 32% Disadvantage Reduced and 68% Disadvantage Stable.
The population living in the study area in 2020 was distributed among the 4 Neighborhood Trajectories similarly to the block group count ( Table 1 ), with Disadvantage Stable containing 38% of the block groups and 36% of the population, while the smallest trajectory was Advantage Reduced with 12% of the block groups and 11% of the population. The racial composition of the Neighborhood Trajectories varied from 62% Non-Hispanic/Latino White and 10% Non-Hispanic/Latino Black in Advantage Stable to more similar proportions in Advantage Reduced of 37% Non-Hispanic/Latino White and 31% Non-Hispanic/Latino Black. Variation in block group racial composition differed within Neighborhood Trajectories, notably within the Advantage Stable and Disadvantage Reduced trajectories ( Fig. 5 ).

Discussion
Structural inequity and racism remain major driving forces behind health inequities, yet our ability to capture or measure structural inequity has been limited. Here we describe one method that captures the dynamic legacy of housing policy. The Neighborhood Trajectories evaluate the influence of the historic policies and practices of residential redlining in the context of ongoing marginalization or development of neighborhoods. Using HOLC maps and current U.S. Census Bureau data, we established Neighborhood Trajectories for 32,144 block groups across 201 cities in the United States. Of these, most block groups had a trajectory of Disadvantage Stable (38%) or Disadvantage Reduced (33%). However, there was significant geographic variation, with the Northeast having a greater proportion of block groups with Disadvantage Reduced compared to the Midwest, where the majority of historic disadvantage remained stable. Additionally, we noted distinct patterns of racial/ethnic demographics between each of the four categories, demonstrating how using either historical or current data alone may have failed to capture the unique aspects of block groups. As demonstrated in Fig. 5 , for instance, the proportion of White residents in Advantage Stable block groups is much higher than in Disadvantage Reduced, despite each trajectory having similar current measures of Area Deprivation Index.
Neighborhood Trajectories expand approaches to understanding structural and historic inequalities in the United States. Considering historic features alone as the measure of structural inequity fails to capture the dynamic aspects of ever-evolving policies, practices, and communities. In the context of civil rights in America, historians have described fixed historic factors as having vampiric qualities which “exists outside of time and history, beyond the processes of life and death, [as well as] change and development.” 4 The Neighborhood Trajectories developed here aim to better classify communities as shaped by both historic factors and the intervening, dynamic changes that have happened since that time. As such, our Neighborhood Trajectories used HOLC maps and current census data at the level of the census block group. However, a similar approach could just as easily be used to evaluate other policies, practices, or systems, such as evolving environmental regulation or the development of the interstate highway system 24 – 26 .
Prior work has established a strong association between residential redlining and current outcomes. This includes associations between redlined areas and an increased likelihood of adverse health conditions, reduced access to health care and healthy food, and increased exposure to pollution. Extensive work has similarly shown that current neighborhood characteristics are associated with shorter life expectancy, worse outcomes from health care, and worse pedestrian safety. 19 , 27 – 29 The Neighborhood Trajectory builds on this literature by creating a tool with which the dynamic processes and policies that shape current neighborhoods and urban landscapes may be further quantitatively analyzed. Two of the primary challenges of evaluating residential redlining are 1) projecting neighborhood maps that predate present-day administrative units (census and municipal) onto current neighborhoods and 2) accounting for or measuring dynamic changes over time. Here we provide one method that bridges HOLC maps with current census boundaries while maintaining fidelity to the original landscape. Prior efforts have translated HOLC grading at the level of census tracts, although this paradigm fails to capture neighborhood heterogeneity at levels smaller than the census tract. 30 , 31 Similarly, there are a considerable number of census block groups that overlap with differently graded HOLC neighborhoods or with varying degrees of ungraded area. We present this method as an approach that uses as much information as possible from HOLC maps while avoiding over-attribution of grading to block groups with little area that was graded in HOLC maps. Consequently, we found that 82.2% of HOLC graded areas were captured with Neighborhood Trajectories.
This development and use of the Neighborhood Trajectory should be considered in terms of its limitations. First, we only included cities where the Mapping Inequality Redlining in New Deal America 21 project provided digitized HOLC data. We cannot account for changes that occurred in other cities. Similarly, Neighborhood Trajectories cannot account for socioeconomic and demographic shifts that may have occurred in the unmapped peripheral portions or suburbs of these cities where a considerable degree of additional policies and practices have shaped segregation in the United States, including restrictive covenants. 13 – 17 , 32
Neighborhood Trajectories describe the area in which people reside, but they do not necessarily describe all residents of an area and they do not track the residents over time who may move into or out of the neighborhood. Likewise, the Neighborhood Trajectories capture the endpoints of historic redlining and current socioeconomic conditions in neighborhoods but do not explain what occurred during the intervening decades. Others have used U.S. Decennial Census data from 1970 to 2010 to categorize the temporal changes in neighborhoods 33 , 34 , which allow for a more nuanced analysis, albeit over a shorter time period. Additionally, residential redlining does not capture the full extent of structural racism in the U.S. as there are varying degrees of additional oppressive or segregated pressures including restrictive covenants or sundown towns that shape the present landscape and health. 35 Similarly, redlining maps did not have uniform impact on communities across the United States. For instance, some residents of redlined areas were prevented from obtaining mortgages at all while other cities had mortgages available for Black residents but restricted the mortgages to properties within redlined areas. 15
While representing changes from the 1930’s to present socioeconomic status, this method does not capture specific or individual policies or practices that could have occurred in neighborhoods over time. Rather, it provides a very high-level perspective of overall trends in cities across the country. Finally, while Neighborhood Trajectories may provide a rough measure of gentrification, with previously disadvantaged communities presently having low deprivation, they do not capture the full spectrum of ways in which gentrification could have occurred. Some areas may have experienced equitable investment with uniform improvement of conditions for the community, while other areas may have experienced asymmetric displacement of populations or further segregation within pockets of the community. Here is where evaluation of community-specific dynamics will provide important, prescriptive insights for city investment, neighborhood planning, and the dismantling of structural racism.
In conclusion, we present one method to capture the dynamic aspects of structural oppression and racism in the United States, from residential redlining to current socioeconomic deprivation. This includes mapping Neighborhood Trajectories for 32,144 block groups in 201 cities in the United States. We believe this method provides a novel approach to evaluating dynamic aspects of structural oppression and racism in the United States. The Neighborhood Trajectories method offers robustness for many research applications that aim to quantify and classify the changes between historic and contemporary socioeconomic conditions to learn more about the temporal trends and impact of historic policies on current neighborhoods.

Abstract

The role of historic residential redlining on health disparities is intertwined with policy changes made before and after the 1930s that influence current neighborhood characteristics and shape ongoing structural racism in the United States. We developed Neighborhood Trajectories, which combine historic redlining data and current neighborhood socioeconomic characteristics, as a novel approach to studying structural racism.
Home Owners’ Loan Corporation (HOLC) neighborhoods for the entire U.S. were used to map the HOLC grades to the 2020 U.S. Census block group polygons based on the percentage of HOLC areas in each block group. Each block group was also assigned an Area Deprivation Index (ADI) from the Neighborhood Atlas ® . To evaluate changes in neighborhoods from historic HOLC grades to present degree of deprivation, we aggregated block groups into “Neighborhood Trajectories” using historic HOLC grades and current ADI. The Neighborhood Trajectories are “Advantage Stable”; “Advantage Reduced”; “Disadvantage Reduced”; and “Disadvantage Stable.”
Neighborhood Trajectories were established for 13.3% (32,152) of the block groups in the U.S., encompassing 38,005,799 people. Overall, the Disadvantage-Reduced trajectory had the largest population (16,307,217 people). However, the largest percentage of Non-Hispanic/Latino Black residents (34%) fell in the Advantage-Reduced trajectory, while the largest percentage of Non-Hispanic/Latino White residents (60%) fell in the Advantage-Stable trajectory.
The development of the Neighborhood Trajectories affords a more nuanced mechanism to investigate dynamic processes arising from historic policy, socioeconomic development, and ongoing marginalization. This adaptable methodology may enable investigation of ongoing sociopolitical processes including gentrification of neighborhoods (Disadvantage-Reduced trajectory) and “White flight” (Advantage-Reduced trajectory).

Financial Support
HC provided support through the GeoSpatial Resource, a section of the Biostatistical and Bioinformatics Shared Resource at the Dartmouth Cancer Center, with NCI Cancer Center Support Grant 5P30CA023108. AL and JW were supported by the National Cancer Institute of the National Institutes of Health under award number K08CA263546. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. License: CC BY. Citation: Res Sq. 2023 Dec 22; rs.3.rs-3783331.
PMC10775359 | 38196752

INTRODUCTION
The human brain undergoes notable changes during development, particularly in white matter tracts that modulate cognitive and motor functions [ 1 ]. Accurately estimating these fibers is crucial for understanding developmental patterns and detecting abnormalities. Advances in diffusion magnetic resonance imaging (dMRI) have provided unprecedented insights into the human brain microstructure. Traditional methods, such as Constrained Spherical Deconvolution (CSD) [ 2 ] and Multi-Shell Multi-Tissue Constrained Spherical Deconvolution (MSMT-CSD) [ 3 ], have been employed to reconstruct fiber orientation distribution functions (FODs) as a proxy for the underlying microstructure. These methods often require a large number of diffusion measurements and/or multiple b values, making them less feasible for uncooperative young subjects. Recently, deep learning (DL) on large datasets has allowed precise FOD estimation [ 4 , 5 , 6 ] with as few as six diffusion samples from developing brains [ 7 ].
While DL can offer significant scanning time reductions, it is particularly susceptible to domain-shift problems. Such shifts can be attributed to several factors, from biological differences [ 8 ] such as age or pathologies [ 1 ] to imaging variations in protocols and scanner types (brand or field strength) [ 9 ].
Data harmonization has been used to reduce variability across sites while preserving data integrity [ 10 ]. The dominant dMRI method operating at the signal level is Rotation Invariant Spherical Harmonics (RISH) [ 11 ], which harmonizes dMRI data without model dependency but requires similar acquisition protocols and site-matched healthy controls. Deep learning techniques offer solutions for non-linear harmonization but risk overfitting and require extensive training data, potentially altering pathological information [ 12 ]. The Method of Moments (MoM) [ 13 ], which aligns diffusion-weighted imaging (DWI) features via spherical moments, stands out for its directionality preservation and independence from matched acquisition protocols or extensive training. Therefore, MoM presents a potentially beneficial approach for addressing the domain shift challenges.
Furthermore, domain adaptation (DA) methods have been used to address domain shifts in medical imaging [ 14 ]. Supervised DA, particularly fine-tuning (FT) models with pre-trained weights on source domain data, is a common method, often augmented with advanced, more targeted techniques [ 15 ]. Semi-supervised DA methods, which leverage a mix of labeled and unlabeled data, can also effectively bridge domain shifts. However, both semi-supervised and unsupervised DAs face challenges in the case of significant anatomical differences, such as those between infants and neonates, where the assumption of feature space similarity may not hold.
This paper investigates the domain shift effects in a DL method [ 7 ] for white matter FOD estimation in the newborn and baby populations. Our goal is to provide a detailed examination of the challenges associated with domain shifts, particularly age-related variations between these young cohorts. We propose possible solutions and emphasize the need for robust frameworks that can cope with the unique variability present in the developing brain.

METHODOLOGY
Data Processing
We used dMRI data from the 3rd release of the Developing Human Connectome Project (dHCP) [ 16 ] and the Baby Connectome Project (BCP) [ 17 ]. The dHCP dataset includes 783 subjects from 20–44 post-menstrual weeks, acquired using a 3T Philips scanner and a multi-shell sequence ( b ∈ {0, 400, 1000, 2600} s/mm²). After preprocessing with SHARD [ 18 ], data resolution was 1.5 × 1.5 × 1.5 mm³. White matter and brainstem masks from dHCP were resampled to this resolution and combined with voxels of fractional anisotropy (FA) > 0.3 to produce the final white matter (WM) mask. The BCP dataset comprises 285 subjects from 0–5 years, scanned using a 3T Siemens scanner with a different multi-shell protocol ( b ∈ {500, 1000, 1500, 2000, 2500, 3000} s/mm²). Denoising and bias, motion, and distortion corrections [ 19 ] also yielded a 1.5 × 1.5 × 1.5 mm³ resolution for the BCP data. The final WM mask was established using an OR operation among the STAPLE [ 20 ]-generated WM mask, voxels with FA > 0.4, and voxels with FA > 0.15 and mean diffusivity (MD) > 0.0011. We also computed the mean FA value within the white matter mask of each subject to analyze the relationship between age and FA value.
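The OR-combined BCP white-matter mask can be sketched voxelwise with numpy (array names are illustrative):

```python
import numpy as np

def bcp_wm_mask(staple_wm, fa, md):
    """Final BCP white-matter mask: STAPLE WM voxels, OR voxels with
    FA > 0.4, OR voxels with FA > 0.15 and MD > 0.0011.
    All inputs are voxelwise arrays of identical shape."""
    return staple_wm.astype(bool) | (fa > 0.4) | ((fa > 0.15) & (md > 0.0011))
```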
Model
As illustrated in Fig. 1 , our model’s workflow is divided into three stages: initial training on a source dataset, followed by separate processes of either fine-tuning or MoM harmonization on the target dataset with varying number of subjects. During inference, harmonized data is tested using the originally trained model to evaluate the effectiveness of MoM, while the fine-tuned model is applied to the original target data to assess the improvements in FOD estimation.
Backbone Model
We employed the U-Net-like network [ 7 ] as the backbone for our experiments. It estimates accurate FODs from dMRI data with six diffusion directions over an extensive field of view (FoV) of 16 × 16 × 16 voxels, and it has demonstrated accurate results when applied to dHCP newborns.
We applied the MSMT-CSD [ 3 ] using all measurements (i.e., 300 diffusion directions for dHCP and 151 directions for BCP) to generate ground-truth (GT) FODs for training and evaluation. To ensure a representative sample, subjects were randomly selected from the datasets based on the desired age range and number of subjects required by each experiment detailed in Sections 2.3 and 2.4 . For each subject, we processed the diffusion signal by selecting six optimal gradient directions, normalizing, projecting onto the spherical harmonic (SH) basis, and cropping to 16 × 16 × 16 patches as in [ 7 ].
For each experiment, we trained the backbone network for 1000 epochs using the Adam optimizer [ 21 ] with an initial learning rate of 5 × 10⁻⁵, weight decay of 1 × 10⁻³, and batches of 35. We used a dropout of 0.1 to prevent overfitting. Model selection was based on the lowest mean squared error (MSE) between predicted and GT FODs in the validation set.
Methods for Addressing Domain Shifts
We explore two primary data harmonization and domain adaptation strategies to handle domain shifts.
Data Harmonization using Method of Moments
MoM [ 13 ] was employed to harmonize DWI data across sites by aligning the mean and variance using linear mapping functions f_θ(S) = αS + β, with parameters θ = {α, β}. This approach adjusts each voxel’s DWI signal S to the reference site’s characteristics. Median images of these moments, smoothed with a Gaussian filter to mitigate artifacts, were computed from the six optimal gradient directions and used to derive the harmonization parameters α and β.
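A simplified sketch of the moment matching (this version matches the per-voxel mean and standard deviation across gradient directions directly, omitting the paper's median images and Gaussian smoothing; names are ours):

```python
import numpy as np

def mom_params(src, ref):
    """Linear map f(S) = alpha*S + beta aligning the first two moments
    of the source DWI signal with the reference site.
    src, ref: arrays of shape (..., n_directions)."""
    mu_src, mu_ref = src.mean(axis=-1), ref.mean(axis=-1)
    sd_src, sd_ref = src.std(axis=-1), ref.std(axis=-1)
    alpha = sd_ref / sd_src
    beta = mu_ref - alpha * mu_src
    return alpha, beta

def harmonize(src, alpha, beta):
    # Broadcast the per-voxel parameters over the direction axis.
    return alpha[..., None] * src + beta[..., None]
```

After harmonization the source signal's mean and variance match the reference by construction, while each direction's deviation from the voxel mean is only rescaled, which is what preserves directional information.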
Domain Adaptation using Fine-Tuning
This process involved knowledge transfer and additional training of the model on the target domain data. We conducted fine-tuning over 100 epochs, with a reduced learning rate of 5 × 10⁻⁶ and smaller batches of 10.
Implementation Details and Code Availability
The training and fine-tuning were performed on an NVIDIA RTX 3090 GPU. We used TensorFlow 2.11 for our DL framework and MATLAB R2022b for MoM harmonization. The code will be made publicly available upon acceptance.
Intra-Site Age-Related Evaluation
We assessed baseline performance on the dHCP and BCP datasets, selecting 100 subjects from specified age ranges (dHCP: 29.3–44.3 post-menstrual weeks; BCP: 1.5–60 postnatal months) and allocating them into training, validation, and testing sets (70/15/15). The backbone model was trained and tested on these splits separately for dHCP and BCP. GT consistency was evaluated by processing two mutually exclusive subsets of the full measurements with MSMT-CSD (referred to as Gold Standards, GS), as in [ 7 ].
To investigate age-related shifts within each site, we conducted age-specific training and cross-testing. The dHCP dataset was split into two age groups: [26.7, 35.0] and [40.0, 44.4] weeks, and the BCP dataset into [0.5, 11] and [20, 36] months, denoted as young and old , respectively. Each group consisted of 60 subjects, split into 40/10/10 partitions for training, validation, and testing, respectively. Fine-tuning was also performed across different age groups within dHCP using 5 subjects from the corresponding target age group.
Inter-Site Experiments
To address both age-related and cross-site domain shifts, we conducted cross-testing between dHCP and BCP with respective baseline models from Section 2.3 . We also evaluated how varying subject numbers in the target training dataset (1, 2, 5, and 10 subjects) affect the performance of MoM harmonization and fine-tuning. Furthermore, an ablation study involved training a model from scratch on 10 target dataset subjects to verify performance gains beyond target set familiarity.
Evaluation Metrics
We quantitatively assessed FOD estimation accuracy using metrics as per [ 7 ]: Agreement Rate (AR) for peak count consistency, Angular Error (AE) for the angular discrepancy between predicted and GT FODs, and Apparent Fiber Density (AFD) from [ 22 ] to evaluate fiber density.
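As an illustration, the angular error between one predicted and one ground-truth peak is the sign-invariant angle between the two direction vectors (a sketch; matching peaks across multiple fibers is not shown):

```python
import numpy as np

def angular_error_deg(peak_pred, peak_gt):
    """Angle in degrees between two fiber peak directions.
    Fiber orientations are antipodally symmetric, so the absolute
    value of the dot product is used."""
    u = np.asarray(peak_pred, dtype=float)
    v = np.asarray(peak_gt, dtype=float)
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    cos_angle = np.clip(abs(np.dot(u, v)), 0.0, 1.0)
    return np.degrees(np.arccos(cos_angle))
```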
RESULTS

Intra-Site Experiments
The DL metrics are first compared to the GT consistency (GS in Table 1 ) for dHCP and BCP. As previously reported in [ 23 ], single-fiber predictions show good agreement, but performance decreases with multiple fibers for both datasets. This is more pronounced in three-fiber cases, which exhibit low DL performance as also reported in [ 7 ]. We therefore do not consider 3-fiber metrics in subsequent experiments.
Moving to age-specific comparisons, we observed different patterns between younger and older age groups (denoted as “y” and “o”, respectively), for dHCP and BCP datasets. Fig. 2 illustrates these differences, revealing that age-related effects in BCP are less marked compared to dHCP. For instance, the difference in single-fiber ARs and AFD between DL y→y and DL o→o is higher within dHCP than BCP, approximately 14% and 21% for AFD error, respectively.
Across age groups, BCP shows greater stability than dHCP when training on young subjects and testing on old ones, or vice versa. This consistency could be due to the rapid development and white matter changes in the first months of life, as opposed to slower white matter development in later periods [ 24 ]. To validate this hypothesis, we examined the average white matter FA in our cohorts ( Fig. 3 ), which indeed shows a significant increase in dHCP but a plateau in the BCP cohort. In general, AE seems less prone to age effects, except when training on older dHCP subjects and testing on younger ones (DL o→y ), where there is also a more pronounced decline in AR. Finally, fine-tuning on five dHCP subjects consistently reduces error rates, especially the AFD error.
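The FA-versus-age trend invoked here can be checked with a simple ordinary-least-squares slope: a clearly positive slope is expected for dHCP and a near-zero one for BCP. This is a pure-Python sketch with illustrative FA values, not the cohort data.

```python
def ols_slope(ages, fa_values):
    """Least-squares slope of mean white-matter FA against age; a positive
    slope indicates a developmental increase, ~0 indicates a plateau."""
    n = len(ages)
    mean_x = sum(ages) / n
    mean_y = sum(fa_values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, fa_values))
    den = sum((x - mean_x) ** 2 for x in ages)
    return num / den

# Illustrative: FA rising with postmenstrual age (dHCP-like behavior)
slope = ols_slope([28, 32, 36, 40, 44], [0.10, 0.13, 0.16, 0.19, 0.22])
```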
Inter-Site Experiments
Given the large number of domain shifts (scanner, protocol, age) and the low GS/DL agreement for multiple-fiber populations ( Table 1 ), we compare the cross-site results on single-fiber populations only. Inter-site performance, depicted in Fig. 5 , shows the capability of the DL model to generalize across datasets (as shown in Fig. 4 ).
AR when testing on dHCP ( Fig. 5 (a) ) displays a marked increase after fine-tuning on one and two subjects; however, going from 2 to 5 and from 5 to 10 subjects offers only marginal further gains. Fine-tuning holds an advantage over MoM, particularly when transferring from BCP to dHCP. In both directions, AE improves slightly as more target subjects are incorporated into fine-tuning, although the improvement is modest (2–3°). Moreover, MoM harmonization is less sensitive to the number of subjects used, for both AE and AR. Finally, the ablation study, in which the model was trained solely on 10 target-dataset subjects, revealed notably reduced performance.
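The angular errors quoted above (2–3°) are typically computed sign-invariantly, since fiber peaks are antipodally symmetric (v and -v describe the same orientation). A sketch follows; the exact convention of [7] may differ.

```python
import math

def angular_error_deg(v_pred, v_gt):
    """Angle in degrees between a predicted and a GT peak direction,
    treating antipodal vectors as the same fiber orientation."""
    dot = sum(a * b for a, b in zip(v_pred, v_gt))
    n1 = math.sqrt(sum(a * a for a in v_pred))
    n2 = math.sqrt(sum(b * b for b in v_gt))
    cos_theta = min(1.0, abs(dot) / (n1 * n2))  # clamp numeric overshoot
    return math.degrees(math.acos(cos_theta))
```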
In summary, the inter-site experiments show that refining the DL-based fiber estimation pipeline on a few target-domain subjects, via fine-tuning or MoM, outperforms direct cross-testing and, in some cases, approaches the accuracy of direct testing (e.g., AR when testing on dHCP, Fig. 5 (a) ). The improvement is markedly larger for transfers from BCP to dHCP than in the reverse direction, suggesting distinct dynamics at play when adapting between a dataset with pronounced age-related shifts (dHCP) and one with more gradual changes (BCP).
CONCLUSION
This work has demonstrated that even a small number of target-domain samples can be instrumental in overcoming the domain shifts encountered in deep-learning-based white matter fiber estimation. Through fine-tuning and, to a lesser extent, MoM harmonization, the models showed improved FOD estimation in developing brains in both cross-age and cross-site settings. Moreover, we observed that the smaller variations in the microstructural development of babies compared to newborns directly influence the DL models' performance in the cross-age experiments. These findings highlight the importance of tailoring DL models to the distinct developmental stages of pediatric populations.
H. Kebiri and M. Bach Cuadra — Equal contribution.
Deep learning models have shown great promise in estimating tissue microstructure from limited diffusion magnetic resonance imaging data. However, these models face domain shift challenges when test and train data are from different scanners and protocols, or when the models are applied to data with inherent variations such as the developing brains of infants and children scanned at various ages. Several techniques have been proposed to address some of these challenges, such as data harmonization or domain adaptation in the adult brain. However, those techniques remain unexplored for the estimation of fiber orientation distribution functions in the rapidly developing brains of infants. In this work, we extensively investigate the age effect and domain shift within and across two different cohorts of 201 newborns and 165 babies using the Method of Moments and fine-tuning strategies. Our results show that reduced variations in the microstructural development of babies in comparison to newborns directly impact the deep learning models’ cross-age performance. We also demonstrate that a small number of target domain samples can significantly mitigate domain shift problems.
Index Terms—
ACKNOWLEDGMENTS
We acknowledge the CIBM Center for Biomedical Imaging, a Swiss research center of excellence founded and supported by CHUV, UNIL, EPFL, UNIGE, HUG, and the Leenaards and Jeantet Foundations. This research was partly funded by the Swiss National Science Foundation (grants 182602, 215641) and by the National Institute of Neurological Disorders and Stroke and the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the US National Institutes of Health (awards R01NS106030, R01NS128281, R01HD110772). We thank Hakim Ouaalam for preprocessing and parcellating BCP T2-weighted images.
License: CC BY. Citation: ArXiv. 2023 Dec 22; arXiv:2312.14773v1
PMC10775360 (PMID: 38196607)
BACKGROUND
Kidney cancer is one of the ten most prevalent cancers in the United States, ranking as the sixth and ninth most common cancer in men and women, respectively ( 1 ). In 2023, it is anticipated that around 81,800 new cases of kidney cancer will be diagnosed in the United States, resulting in 14,890 deaths ( 1 ). Renal cell carcinoma (RCC) accounts for approximately 90% of all kidney cancer cases ( 2 ). While early-stage RCC patients have a better prognosis, the survival rate for advanced-stage RCC patients is dismal, with a five-year survival rate of 12%–15% only ( 1 ). One-third of RCC patients present with widespread metastasis at diagnosis, and nearly half of the patients who undergo primary tumor resection develop distant metastasis ( 3 ). Existing therapies for advanced RCC, including chemotherapy, radiotherapy, and targeted therapies such as tyrosine kinase inhibitors (TKI), mammalian target of rapamycin (mTOR) inhibitors, or vascular endothelial growth factors (VEGF)-targeted therapies are unable to provide long-term survival benefits ( 4 ). Recently, immune checkpoint inhibitors (ICI) have been approved for the treatment of advanced RCC, either alone or in combination with TKI, following promising results in large Phase III trials ( 5 – 8 ). Nonetheless, alternative therapies are necessary for patients who suffer from severe side effects, experience disease progression after an initial positive response, or fail to respond altogether to ICI ( 9 ).
Apart from immunotherapy, radiation therapy (RT) is another effective curative treatment method for cancer ( 10 ). However, different types of cancer have varying degrees of resistance to RT, with RCC being known to have relatively higher resistance compared to other cancer types ( 11 , 12 ). Cancer cells develop resistance to RT through various mechanisms, including DNA damage repair, cell cycle arrest, changes in oncogenic and tumor suppressor signaling pathways, tumor microenvironment (TME) remodeling, cancer stemness, and metabolic reprogramming ( 13 ). However, recent advancements in treatment planning, delivery techniques, immobilization strategies, image guidance, and computed tomography have substantially enhanced the effectiveness of RT. Assisted by modern computing power, single-fraction and multi-fraction stereotactic ablative radiotherapy (SABR) have achieved greater precision in delivering high-dose radiation, resulting in better treatment outcomes while minimizing treatment-related toxicities ( 14 ). Consequently, numerous clinical trials are currently investigating the effectiveness of SABR, either alone or in combination with other treatment modalities, as viable treatment options for RCC ( 12 ). However, combining SABR with agents that can override RCC’s intrinsic resistance to RT is more likely to improve therapeutic outcomes. Several studies have already demonstrated that certain therapeutic agents, including chemotherapy, can act as radiosensitizers, thereby prompting research studies combining RT with such agents ( 15 ).
Among the radiosensitizers, mTOR inhibitors such as everolimus enjoy distinct advantages over other chemotherapeutic agents since they also exert inherent antitumor and antiangiogenic properties in RCC ( 16 , 17 ). Notably, mTOR inhibitors disrupt multiple mechanisms associated with radioresistance in cancer cells, including cancer stemness, metabolic pathways, DNA damage repair pathways, and various oncogenic pathways ( 18 ). Consequently, several clinical trials investigated the efficacy of combining RT with everolimus across various cancer types, including RCC ( 19 – 23 ). While this approach demonstrated efficacy in some patients, its overall clinical significance was compromised by dose-limiting toxicities ( 19 , 24 , 25 ).
Survivin expression has also been found to be associated with RT resistance, and genetic depletion or chemical inhibition of survivin has been shown to enhance radiosensitivity across various cancer types ( 26 – 30 ). Survivin is implicated in multiple RT resistance mechanisms including DNA damage repair, cell cycle, metabolic reprogramming, and stemness ( 31 – 33 ). YM155, a small imidazolium-based molecule, effectively inhibits the expression of survivin at both mRNA and protein levels and demonstrates significant antitumor efficacy and radiosensitizing activity in numerous animal models of cancer ( 30 , 34 ). Notably, YM155 has shown the capacity to overcome resistance to mTOR inhibitors in renal and breast cancer ( 35 , 36 ). Given these observations, we postulated that YM155 would synergize with everolimus in sensitizing RCC cells to RT. Interestingly, despite being tested in numerous clinical trials, YM155 has not yet received approval for clinical use ( 37 ). The lack of success in clinical trials may be attributed to its poor pharmacokinetic stability, as indicated by studies revealing a rapid decline in YM155 levels in both serum and tumors after completing treatment ( 38 ).
Combination therapies can surmount drug resistance, but often the ensuing increase in toxicity compels the discontinuation of therapy or dose reductions ( 39 ). To address this issue, target-specific drug delivery platforms are being explored with the capacity to deliver multiple drugs concurrently to tumors ( 40 ). Previously we developed a tumor-targeted liposomal formulation that shows promise in delivering multiple drugs to tumors effectively without eliciting toxicity in animal models ( 41 , 42 ). Hence, we hypothesized that a similar tumor-targeted liposomal formulation combining everolimus with YM155 will have better efficacy and reduced systemic toxicity and will synergistically sensitize RCC tumors towards RT. The goal of this study is to determine whether this tumor-targeted liposomal formulation combining everolimus and YM155 inhibits growth in RCC tumors and at the same time sensitizes them to radiation therapy.
METHODS
Reagents
DOPC and DSPE-(PEG)2000-OMe were purchased from Avanti Polar Lipids and Nanosoft Polymers, respectively. Cholesterol was purchased from Sigma. TTP-conjugated lipopeptide was synthesized as described previously. Everolimus and YM155 were obtained from LC laboratories and MedChemExpress, respectively. Antibodies against mTOR, phospho-mTOR, p70S6K, phospho-p70S6K, survivin, ATM, PARP1, and β-actin were obtained from Cell Signaling Technology. ATR, CHk1, and Chk2 antibodies were obtained from Santa Cruz Biotechnology. CD3, CD8, and Ki67 antibodies were from Abcam, while CD45 antibody was from Biolegend.
Cell Culture
The 786-O cell line was obtained from American Type Culture Collection (ATCC). The Renca cell line was a kind gift from Dr. John A. Copland (Mayo Clinic). No authentication of the cell lines was done by the authors. The 786-O cell line was maintained in Dulbecco's Modified Eagle Medium (DMEM), and RPMI-1640 medium was used for the Renca cell line. Both media were supplemented with 10% FBS and 1% penicillin–streptomycin (Invitrogen), and cells were cultured at 37°C in a humidified atmosphere with 5% CO₂. Cells from 85%–90% confluent cultures were used in the experiments.
Preparation and characterization of drug-loaded liposomes
A modified ethanol injection technique was employed to formulate the E-L, Y-L, or EY-L liposomes. Briefly, required amounts of DOPC (3.93 mg), Cholesterol (0.483 mg), DSPE-PEG(2000)-OMe (0.27 mg), and TTP-conjugated lipopeptide (0.22 mg) with everolimus (0.4 mg), and/or YM155 (0.8 mg) were dissolved in 400 μL ethanol and the solution was warmed at 65°C for 5 minutes. Subsequently, this ethanolic solution was slowly injected into 600 μL preheated milli-Q water at 65°C while continuously vortexing the mixture, resulting in the spontaneous formation of liposomes. Removal of unentrapped drugs and liposome characterization were performed as described previously ( 41 , 42 ).
In vitro cytotoxicity assay
Approximately 5 × 10³ 786-O or Renca cells per well were seeded in 96-well plates and allowed to settle for 18–24 hours. Then, cells were treated with increasing concentrations of E-L, Y-L, and EY-L diluted in the respective media and incubated for an additional 72 hours (n = 3 wells per concentration). Cell viability was determined with the Celltiter 96 Aqueous One Solution Cell Proliferation Assay kit (Promega) as described previously ( 41 , 42 ).
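The IC50 values reported later can be estimated from such viability curves. The sketch below uses simple log-linear interpolation between the two doses bracketing 50% viability — a minimal stand-in for the full dose-response (e.g. four-parameter logistic) fitting one would normally perform; the concentrations and viabilities are illustrative.

```python
import math

def interpolate_ic50(concs, viabilities):
    """Estimate IC50 by linear interpolation in log10(concentration)
    between the first pair of doses bracketing 50% viability.
    `concs` must be ascending; `viabilities` are in percent."""
    for (c1, v1), (c2, v2) in zip(zip(concs, viabilities),
                                  zip(concs[1:], viabilities[1:])):
        if v1 >= 50 >= v2:
            frac = (v1 - 50) / (v1 - v2)
            log_ic50 = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_ic50
    return None  # 50% viability never crossed in the tested range

ic50 = interpolate_ic50([0.001, 0.01, 0.1, 1.0], [95, 80, 40, 10])
```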
Animals used in the study
Six- to eight-week-old SCID and Balb/c mice were obtained from in-house breeding and housed in the institutional animal facilities. All animal experiments were performed following the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) guidelines under protocols approved by the Mayo Clinic Institutional Animal Care and Use Committee (IACUC).
In vivo tumor regression experiment in subcutaneous Renca tumors
The in vivo tumor regression efficacy of the drug-loaded liposomes was analyzed in syngeneic subcutaneous Renca tumors developed in Balb/c mice (n = 5 per treatment group). E-L (1.94 mg/kg E), Y-L (1.44 mg/kg Y), and EY-L (1.94 mg/kg E, 1.44 mg/kg Y) were intravenously administered twice a week for 4 weeks to mice bearing ~ 50 mm³ tumors. Tumors were measured weekly with calipers and tumor volumes were calculated using the formula: Volume = 0.5 × a × b², where a and b are the longest and shortest diameter, respectively. Tumor growth curves were obtained by plotting tumor volumes against time. Finally, mice were sacrificed to harvest the tumors for immunohistochemistry.
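The caliper-based modified-ellipsoid volume formula above, in code form (the function name is ours):

```python
def tumor_volume_mm3(a_mm, b_mm):
    """Modified-ellipsoid tumor volume V = 0.5 * a * b^2 used in the text,
    where a is the longest and b the shortest tumor diameter (mm)."""
    a, b = max(a_mm, b_mm), min(a_mm, b_mm)  # order-insensitive
    return 0.5 * a * b ** 2

volume = tumor_volume_mm3(10, 5)  # 0.5 * 10 * 25 = 125.0 mm^3
```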
In vivo tumor regression experiment in orthotopic Renca tumors
We further analyzed the efficacy of EY-L in syngeneic orthotopic Renca tumors developed in Balb/c mice (n = 4 for control and n = 5 for EY-L treatment group). EY-L (1.94 mg/kg E, 1.44 mg/kg Y) was intravenously administered twice a week for 4 weeks to mice bearing orthotopic Renca tumors starting after 2 weeks of implantation. Tumor growth was monitored weekly by measuring bioluminescence in an IVIS Xenogen (Perkin Elmer). Tumor growth curves were obtained by plotting fold changes in bioluminescence from initial values against time. The survival was also analyzed by monitoring the IACUC-approved endpoint for each mouse.
In vitro radiosensitivity experiments
For in vitro radiosensitivity, RCC cells were plated in 2 sets of 6-well plates and treated with PBS, E-L, Y-L, and EY-L for 48 hours. The sub-IC50 concentration of liposomes (0.01% for 786-O, 0.1% for Renca) was selected based on the results from the MTT assay to minimize cell death due to drug treatment alone. One set of cells was then exposed to 2 Gy radiation at room temperature at a 3.9 Gy/min dose rate and a 160 kV tube voltage using an X-RAD 160 Irradiator (Precision X-Ray Inc., USA). Following irradiation, the cell samples were returned to a 5% CO₂ incubator. Both irradiated and non-irradiated cells were then harvested and seeded in triplicate (100 cells/well) in 12-well plates in fresh culture media without drugs and allowed to grow for 10–14 days. Then, colonies were fixed with 4% formaldehyde and stained with 0.2% Crystal Violet solution, and colonies larger than 30 μm in diameter were counted. The surviving fraction for a particular treatment group was determined by dividing the plating efficiency of the irradiated cells by the plating efficiency of the corresponding unirradiated cells.
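The clonogenic surviving-fraction computation described above can be written directly from its definition (function names are ours; colony counts below are illustrative):

```python
def plating_efficiency(colonies, cells_seeded):
    """Fraction of seeded cells that grow into countable colonies."""
    return colonies / cells_seeded

def surviving_fraction(col_irr, seeded_irr, col_ctrl, seeded_ctrl):
    """Plating efficiency of irradiated cells normalized by that of the
    corresponding unirradiated cells, as defined in the text."""
    return (plating_efficiency(col_irr, seeded_irr)
            / plating_efficiency(col_ctrl, seeded_ctrl))

# e.g. 30 colonies from 100 irradiated cells vs 60 from 100 controls
sf = surviving_fraction(30, 100, 60, 100)
```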
Immunoblot analysis
Lysates were prepared from treated cells using NP-40 lysis buffer supplemented with a protease inhibitor cocktail. Protein concentrations of the lysates were measured by Bradford assay. Equal amounts of proteins from each sample were subjected to SDS-PAGE and transferred to polyvinyl difluoride membranes followed by immunoblotting with primary antibodies and respective secondary antibodies (1:10000). Enzyme-linked chemiluminescence was used to detect antibody-reactive bands in Chemidoc MP (Bio-Rad). Blots from the same experiments were used for presentation.
In vivo radiosensitivity experiments
To evaluate the in vivo radiosensitization potential of EY-L in RCC tumors, we first developed subcutaneous 786-O xenografts by implanting 5 × 10⁶ cells into the right flanks of 6–8-week-old SCID mice. When the tumors became palpable, twice-a-week EY-L (1.94 mg/kg E, 1.44 mg/kg Y) intravenous administrations were started and continued for 3 weeks. Two doses of focused single-beam 10 Gy radiation each were administered to the tumors on days 12 and 19 for mice belonging to the radiation-only (R) and combination (EY-L + R) groups. Radiation was administered at 2.9 Gy/min in an XRAD-SmART instrument (225 kV, 13 mA). Additionally, a separate group of mice (R-early) received two doses of focused 10 Gy radiation on days 5 and 12. This was done to ensure that their initial average tumor volume was similar to that of the EY-L + R group at the time of the first radiation dose. Treatment was stopped after three weeks, and tumor growth was monitored for another 3 weeks.
A similar experiment was conducted using subcutaneous Renca tumors developed in syngeneic Balb/c mice. Here, we only kept the R-Early group for the radiation-only treatment group for a more stringent comparison of the combination group with the radiation-only group. Treatment was discontinued after three weeks, and tumor growth was closely monitored until an IACUC-approved endpoint was reached for each mouse. Given the distinct endpoints for each mouse, we refrained from using the tumor tissues from this particular experiment. Instead, we conducted a similar experiment with another group of tumor-bearing mice and concluded it after 21 days (i.e., two days following the final radiation dose). This allowed us to harvest tumors for immunohistochemistry analysis, focusing on potential alterations in immune-cell infiltrations within the tumor microenvironment resulting from the treatment. Here, the radiation treatments were performed on the same days (i.e., days 12 and 19) in both the combination group and the radiation-only group to keep the timeline the same between radiation and harvesting of tumors in these two groups.
Immunohistochemistry
Tumors were harvested and fixed in neutral buffered 10% formalin at room temperature for 24 hours. Then they were embedded in paraffin and 5 μm thick sections were cut for preparing slides. Hematoxylin and eosin (H&E), Ki67 (1:1000), CD45 (1:1000), CD3 (1:1000), and CD8 (1: 1000) staining were performed in deparaffinized slides as applicable following the manufacturer’s instructions (DAB 150; Millipore). Slides were stained with stable diaminobenzidine and counterstained with hematoxylin. Finally, slides were digitized using an Aperio AT2 slide scanner (Leica) and analyzed using ImageScope software (Leica).
Immunocytochemistry
Tumors were harvested and fresh frozen in OCT medium where applicable. Then, 5 μm thick sections were cut from these fresh frozen tumors for preparing slides. Pericentrin (1:1000) staining was performed in these fresh frozen sections. Slides were stained with Alexa-Fluor-670 conjugated secondary antibody. Finally, slides were mounted in Vectashield mounting medium containing DAPI and imaged using an LSM 780 Confocal microscope and analyzed.
Statistical analyses
Microsoft Excel and GraphPad Prism were used for data analyses. One-way ANOVA followed by Tukey's post-hoc analysis or an unpaired two-tailed t-test was used to determine the significance of differences between treatment groups where applicable. For tumor growth curves, the endpoint tumor volumes were compared among groups using an unpaired two-tailed t-test where applicable. Statistical significance was defined as p < 0.05 (*), p < 0.01 (**), p < 0.001 (***), and p < 0.0001 (****). Error bars indicate calculated SD values.
RESULTS
EY-L is a homogeneous, positively charged nanoformulation
The amounts of the lipid and drug components of the drug-loaded liposomes (E-L, Y-L, and EY-L) are reported in Supplementary Table S1 along with drug loading efficiency (DLE) and encapsulation efficiency (EE) values. The initial amounts of everolimus and YM155 used during liposome preparation were 0.4 mg and 0.8 mg per 1 mL of liposomes, respectively. Everolimus, a highly water-insoluble lipophilic drug, displayed an EE of 98.19% ± 2.13% in E-L and 96.73% ± 2.01% in EY-L owing to its nearly complete incorporation into the liposome bilayer. YM155 displayed only 37.14% ± 1.70% EE in Y-L and 36.05% ± 2.35% in EY-L due to its hydrophilic nature. The DLE values for everolimus in E-L and YM155 in Y-L were 7.42% ± 0.16% and 5.71% ± 0.26%, respectively, whereas in EY-L they were 6.94% ± 0.14% and 5.17% ± 0.34%, respectively. The EE values in the dual drug-loaded liposomes (EY-L) were slightly lower than in the single drug-loaded ones, but not significantly so; plausibly, the distinct spatial distribution of everolimus and YM155 inside the liposomes does not affect their individual encapsulation efficiencies. However, the DLE values of EY-L differed more from those of E-L or Y-L because the total weight of EY-L liposomes, containing both drugs, exceeds that of E-L or Y-L liposomes containing a single drug.
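The EE and DLE definitions implied by these numbers can be reproduced as follows. The definitions are reverse-engineered from the reported values (e.g. ~7.42% DLE for everolimus in E-L follows from entrapped drug over lipids-plus-entrapped-drug weight), so confirm them against the cited protocol before reuse.

```python
def encapsulation_efficiency(entrapped_mg, initial_mg):
    """EE (%): fraction of the drug input that ends up entrapped."""
    return 100.0 * entrapped_mg / initial_mg

def drug_loading_efficiency(drug_mg, lipid_mg, other_drugs_mg=0.0):
    """DLE (%): entrapped drug weight over total particle weight
    (lipids plus all entrapped drugs)."""
    return 100.0 * drug_mg / (lipid_mg + drug_mg + other_drugs_mg)

lipids = 3.93 + 0.483 + 0.27 + 0.22     # DOPC + chol + DSPE-PEG + lipopeptide (mg)
e_in_el = 0.4 * 0.9819                  # everolimus entrapped in E-L at 98.19% EE
dle_e_in_el = drug_loading_efficiency(e_in_el, lipids)  # ~7.4%, matching Table S1
```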
The average hydrodynamic size, polydispersity index (PDI), and zeta potential of E-L, Y-L, and EY-L are consolidated in Supplementary Table S2 . The hydrodynamic diameters of E-L, Y-L, and EY-L were 62.15 nm ± 0.40 nm, 67.55 nm ± 0.24 nm, and 67.15 nm ± 0.31 nm, respectively. All the liposomal formulations had an average size of less than 100 nm, which is suitable for better penetration through the tumor microenvironment ( 43 ). The polydispersity indices of E-L, Y-L, and EY-L were 0.178 ± 0.015, 0.195 ± 0.007, and 0.205 ± 0.01, respectively, suggesting excellent uniformity of the liposomes. The zeta potentials of E-L, Y-L, and EY-L were 10.23 mV ± 2.4 mV, 32.7 mV ± 4 mV, and 37.5 mV ± 3.3 mV, respectively. A positive zeta potential indicates the stability of the liposomal suspension as well as stronger interaction with negatively charged cell membranes. Since all these liposomes were positively charged, the formulations are expected to be stable and efficient in cellular uptake ( 44 ).
EY-L shows a robust antiproliferative effect in RCC cells in vitro
Following characterization, we then proceeded to assess the in vitro cytotoxicities of the drug-loaded liposomal formulations in 786-O and Renca cells. Interestingly, E-L did not show significant cytotoxicity at the concentrations tested in either of the cells whereas both Y-L and EY-L showed similar cytotoxicity in both cases ( Fig. 1A – B ). 786-O cells were more sensitive towards Y-L or EY-L treatment than Renca, the IC50 values being more than tenfold less in 786-O cells (IC50 ~ 0.022% liposome) than in Renca cells (IC50 ~ 0.3% liposome). Here, 1% liposome is equivalent to ~ 4.1 μM (in E-L) or ~ 4.04 μM (in EY-L) everolimus, and ~ 6.7 μM (in Y-L) or ~ 6.51 μM (in EY-L) YM155.
EY-L demonstrates superior inhibition of mTOR and survivin over E-L and Y-L, respectively
Western blot experiments demonstrate that EY-L was superior to E-L and Y-L in inhibiting phosphorylation of p70S6K (downstream of mTOR) and survivin expression, respectively (Supplementary Fig. S1) . This suggests that Everolimus and YM155 act synergistically to augment each other’s function when combined in a single formulation. Interestingly, the same amount of Everolimus alone (as E-L) was not able to inhibit phosphorylations of p70S6K in any of the cells. Y-L was effective at reducing survivin expression in 786-O cells only, but not in Renca cells. In contrast, EY-L was equally effective in inhibiting p70S6K phosphorylation and survivin expression in both cell lines.
EY-L demonstrates a strong antitumor effect in a subcutaneous syngeneic murine RCC model
Inspired by the superior in vitro efficacy of EY-L, we proceeded to analyze the in vivo efficacy of the drug-loaded liposomes in a highly aggressive syngeneic mouse RCC model developed by subcutaneous implantation of Renca cells in immune-competent Balb/c mice. Both E-L and EY-L displayed remarkable tumor growth inhibition throughout the study, with EY-L being the most effective treatment group ( Fig. 1C ). The individual tumor growth curves from this experiment are provided in Supplementary Figure S2. Interestingly, YM155 did not show any visible tumor growth inhibition as a single liposomal formulation (Y-L) in this experiment but augmented the efficacy of everolimus when combined in the same liposomal formulation (EY-L). The H&E and Ki67 staining of the tumor sections demonstrates strong antiproliferative activity in EY-L-treated tumors ( Fig. 1D – E ).
EY-L impedes tumor growth in an orthotopic syngeneic murine RCC model
We further tested the efficacy of EY-L in an orthotopic syngeneic mouse ccRCC model developed by subcapsular implantation of luciferase-labeled Renca cells in immune-competent Balb/c mice. Since EY-L was the most effective in the previous experiment, we did not include E-L or Y-L in this experiment or further in vivo experiments. EY-L showed significant tumor growth inhibition ( Fig. 2A – B ) and enhanced median survival ( Fig. 2C ) compared to the control group in this model. The individual tumor growth curves from this experiment are provided in Supplementary Figure S3.
EY-L sensitizes RCC cells toward radiation in vitro
Since both E and Y have individually been shown to increase the sensitivity of different cancer cells toward radiation, we investigated whether EY-L has any synergistic radiosensitizing effect over E-L or Y-L in RCC cells in vitro by performing a colony formation assay. We used both 786-O and Renca cells in this experiment. 786-O cells formed dispersed colonies with diffuse staining, whereas Renca cells formed well-defined colonies with good staining. Nonetheless, the EY-L treated group showed the lowest post-radiation surviving fraction among all treatment groups, including the control, E-L, and Y-L groups ( Fig. 3A – C ). The Bliss synergy scores for the radiosensitization of EY-L over E-L and Y-L were 0.81 and 0.50 for 786-O and Renca, respectively, suggesting a moderate-to-strong synergistic effect of the combination therapy.
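Bliss-independence scoring, as commonly defined, compares the observed combined effect with the expected effect under independence. The exact scoring procedure behind the reported values (e.g. whether excesses are averaged across doses) is not specified in the text, so the sketch below shows only the core per-condition calculation.

```python
def bliss_excess(effect_a, effect_b, effect_combo):
    """Observed combined effect minus the Bliss-expected effect
    Ea + Eb - Ea*Eb, with effects as fractions in [0, 1].
    Positive excess indicates synergy."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_combo - expected

excess = bliss_excess(0.3, 0.4, 0.7)  # expected 0.58 -> excess 0.12
```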
EY-L inhibits multiple DNA damage repair mechanisms
Efficient DNA damage repair mechanisms are required to alleviate the harmful effects of radiation, and these pathways are typically exploited by various cancer cells to maintain their radioresistant nature. Crucial proteins involved in DNA damage repair include PARP1 (widely recognized as a first-line responder in the DNA damage response), ATM/Chk2 (double-stranded break repair), and ATR/Chk1 (single-stranded break repair). Not surprisingly, EY-L was highly effective, and in most cases better than E-L or Y-L, in reducing the expression of these proteins, even subduing any increase post-radiation in some instances ( Fig. 3D – E ).
EY-L sensitizes RCC xenograft tumors toward radiation in vivo
Inspired by the results from the in vitro radiosensitivity experiments and Western blot analysis, we then proceeded to evaluate the in vivo radiosensitivity of EY-L. We first used subcutaneous 786-O xenografts developed in SCID mice to evaluate the in vivo radiosensitization potential of EY-L in the absence of any additional effects due to the immune system. We evaluated only EY-L in this experiment since it was superior to E-L and Y-L in vitro. The experiment timeline is provided in Fig. 4A . Treatment was stopped after three weeks (treatment period), and tumor growth monitoring was continued for another 3 weeks of washout period. As is clear from the growth curve, the starting tumor volume of the R group was higher than that of the EY-L + R group, whereas it was similar between the R (Early) and EY-L + R groups ( Fig. 4B ). The individual tumor growth curves from this experiment are provided in Supplementary Figure S4 . Interestingly, the R (Early) group showed an initial difference from the R group due to earlier exposure to radiation, but after 6 weeks there was no significant difference between them. The EY-L + R group showed significant impedance in tumor growth compared to all other groups, suggesting the augmentation of radiation therapy by EY-L. We performed immunohistochemistry on the FFPE tumor tissues obtained after the endpoint. Interestingly, we did not see any significant difference in Ki67 staining among the EY-L, R, EY-L + R, and R (Early) groups, although all of them were significantly lower than the control group ( Fig. 4C – D ). We believe this may be due to the waning of treatment-induced effects during the additional 21 days of the washout period.
EY-L sensitizes syngeneic RCC tumors toward radiation in vivo
A similar experiment was performed in subcutaneous Renca tumors developed in syngeneic Balb/c mice to assess if the immune system plays any additional role in EY-L mediated radiosensitization. Here, we only kept the R (Early) group for the radiation-only treatment group for a more stringent comparison of the efficacy of the combination group with the radiation-only group. A similar experimental timeline as the above experiment was followed ( Fig. 5A ). Treatment was stopped after 3 weeks, and tumor growth was monitored until an IACUC-approved endpoint was reached for each mouse. As anticipated, EY-L + R treatment led to a noticeable inhibition of tumor progression compared to the control, EY-L, or R (Early) groups ( Fig. 5B ). The individual tumor growth curves from this experiment are provided in Supplementary Figure S5 .
Based on our experience with the immunohistochemistry results in 786-O xenografts, we did not use the tumor tissues from the above experiment, since the long washout period may have reduced any therapy-induced effects in the tumor tissues. Hence, we repeated the experiment in another set of tumor-bearing mice and stopped the experiment after 21 days (i.e., 2 days after the final radiation dose) to harvest tumors for immunohistochemistry and analyze any treatment-induced alterations in immune-cell infiltration in the tumor microenvironment ( Fig. 5C ). The radiation dosing schedules were kept the same between R and EY-L + R (Day 12 and Day 19) in this experiment to remove any disparity in treatment-induced alterations in endpoint immunohistochemistry due to different dosing schedules and washout periods. The individual tumor growth curves from this experiment are provided in Supplementary Figure S6 . Immunohistochemistry was performed on FFPE tumor tissue sections for H&E, Ki67, CD45, CD3, and CD8 ( Fig. 5D ). The quantification of Ki67, CD45, CD3, and CD8 staining was performed as well ( Fig. 5E – H ). The EY-L + R group showed significantly lower Ki67 positivity among all the groups ( Fig. 5E ). CD45 staining was not significantly affected among the treatment groups, although the EY-L + R group showed slightly lower abundance ( Fig. 5F ). CD3 + T cells were significantly higher in both the EY-L and EY-L + R treatment groups compared to the control group ( Fig. 5G ). Interestingly, CD8 + T cells in both the EY-L and EY-L + R treatment groups were significantly higher than in the control or R groups ( Fig. 5H ). However, no significant difference was observed between the EY-L and EY-L + R groups. Nonetheless, this experiment suggests that the immune system contributes additionally to EY-L-mediated radiosensitization of Renca tumors.
EY-L induced mitotic catastrophe in RCC tumors, which was aggravated by radiation exposure
H&E staining of the tumor tissue sections obtained from the above experiment showed the presence of several multinucleated cells in the EY-L and EY-L + R treated tumors, with higher abundance in the combination group ( Fig. 5D ). Giant multinucleated cells characterized by missegregated and uncondensed chromosomes are often morphological markers of mitotic catastrophe. Centrosome amplification induced by radiation or other DNA-damaging treatments, and the subsequent formation of multipolar mitotic spindles, are potential prerequisites of mitotic catastrophe. Pericentrin staining of fresh frozen tumor sections ( Fig. 6A ) from the above experiment showed a significantly higher pericentrin count in EY-L + R treated tumors than in control or radiation-only tumors, but not in EY-L treated tumors ( Fig. 6B ). However, the pericentrin/nuclei ratio, a closer estimate of centrosomes per cell, was significantly increased in the EY-L + R group compared to all other groups ( Fig. 6C ), suggesting a markedly higher incidence of mitotic catastrophe in the EY-L + R group.

DISCUSSION
The primary objective of RT in radiation oncology is to hinder the proliferation of cancer cells and ultimately eliminate them. RT achieves this through various mechanisms, including apoptosis, autophagy, mitotic death (or mitotic catastrophe), necrosis, and senescence ( 45 ). However, given that radiation can harm both cancerous and healthy cells, the focus of RT is to maximize the radiation dose directed at the tumor while minimizing exposure of adjacent normal cells or those in the path of the radiation. Advanced technologies for RT delivery, such as SBRT, facilitate the administration of a maximum radiation dose to the tumor while sparing healthy tissues ( 14 ).
Another strategy to enhance radiation therapy treatment outcomes involves the use of radiosensitizers for radiosensitization of cancer cells ( 15 ). Radiosensitization is a process aimed at heightening the vulnerability of cancer cells to radiation-induced damage, while simultaneously minimizing potential harm to the adjacent healthy tissues. Radiosensitizers can affect cancer cells in various ways including increasing ROS within the cancer cells, inhibiting DNA repair mechanisms, modifying the tumor microenvironment, and targeting specific molecular pathways or proteins involved in cell survival and radiation resistance ( 46 ). In recent years, there has been a substantial surge in interest regarding the use of radiosensitizers to augment the efficacy of radiotherapy. Radiosensitizers can be categorized into three main groups based on their composition: small molecules, macromolecules, and nanomaterials ( 47 ). Radiosensitizers being evaluated in various clinical trials include Cisplatin, Gemcitabine, Olaparib, Paclitaxel, Temozolomide, Cetuximab, noble metal nanoparticles, and heavy metal nanoparticles ( 47 ).
We included everolimus and YM155, inhibitors of mTOR and survivin, respectively, as radiosensitizers in the present study. The selection of this combination was partly rationalized based on the findings of a couple of previous studies demonstrating that YM155 was able to overcome resistance to mTOR inhibitors in renal and breast cancer ( 35 , 36 ). The result obtained from the tumor growth inhibition study in a subcutaneous murine RCC model further corroborated these observations ( Fig. 1 ). EY-L was effective in impeding tumor growth and enhancing survival in orthotopic tumors as well ( Fig. 2 ).
Additionally, both mTOR and survivin are implicated in cell proliferation, survival, and DNA damage response pathways, which are responsible for imparting RT resistance in cancer ( 18 , 31 – 33 ). Consequently, both mTOR inhibitors and survivin inhibitors have gained significant attention in recent years due to their potential role as radiosensitizers in cancer treatment. Several clinical trials have explored the combination of mTOR inhibitors with radiation therapy in various cancer types ( 19 – 23 ). These trials mostly aimed to assess the safety and efficacy of this combination strategy, and their findings suggest potential benefits. Based on the above observations, we hypothesized that simultaneously inhibiting these two pathways would synergistically augment the effect of radiation on cancer cells. Indeed, the clonogenic assay in our study showed a moderate-to-strong synergistic effect of this combination in two different RCC cell lines ( Fig. 3 ). The combination also efficiently reduced the expression of multiple DNA damage response elements ( Fig. 3 ). Hence, it was not surprising that the combination augmented the effects of radiation in a subcutaneous RCC xenograft model ( Fig. 4 ).
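The synergy readout from a clonogenic assay can be illustrated with a simple calculation. The paper does not state which synergy metric was used; the sketch below applies the Bliss independence model to hypothetical surviving fractions, purely as an illustration.

```python
# Illustrative Bliss-independence synergy check for a drug + radiation
# combination. All surviving fractions (SF) below are hypothetical, not
# the study's data; the paper does not specify its synergy metric.

def bliss_expected(sf_a: float, sf_b: float) -> float:
    """Expected combined SF if the two agents act independently."""
    return sf_a * sf_b

def bliss_excess(sf_a: float, sf_b: float, sf_combo: float) -> float:
    """Positive values indicate synergy: observed kill exceeds expectation."""
    return bliss_expected(sf_a, sf_b) - sf_combo

sf_drug = 0.60       # hypothetical SF after drug alone
sf_radiation = 0.40  # hypothetical SF after radiation alone
sf_combo = 0.15      # hypothetical SF after the combination

print(f"expected SF = {bliss_expected(sf_drug, sf_radiation):.2f}, "
      f"observed SF = {sf_combo:.2f}, "
      f"Bliss excess = {bliss_excess(sf_drug, sf_radiation, sf_combo):.2f}")
```

An observed combined surviving fraction well below the independence expectation, as in this toy example, is what a "moderate-to-strong" synergy call reflects.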
However, this xenograft model does not consider the effect of an intact immune system on the outcome of RT. RT not only exerts cytotoxic effects on tumor cells but also amplifies antitumor immunity by modifying the tumor microenvironment (TME) to elicit a potent antitumorigenic immune response ( 48 – 51 ). RT induces immunogenic cell death, resulting in the release of various cytokines and chemokines into the TME, which serve as chemoattractants facilitating the infiltration of dendritic cells (DCs) to the tumor site ( 52 ). The activation of DCs and the upregulation of cytotoxic T lymphocytes are believed to be the cause of the radiation-induced antitumorigenic immune response ( 53 , 54 ). Conversely, RT has demonstrated the ability to induce immunosuppression by promoting the infiltration of regulatory T cells (Tregs) and myeloid-derived suppressor cells (MDSCs) into the TME ( 55 – 57 ).
Everolimus, typically immunosuppressive, has been shown to increase the abundance of Tregs and MDSCs in both the TME and circulation ( 58 ). Although the tumor-targeted liposomal formulation is anticipated to reduce the systemic exposure of everolimus, its potential to elevate immunosuppressive Tregs and MDSCs in the tumor microenvironment, thereby counteracting any immune-mediated enhancement of radiation therapy, cannot be disregarded. On the contrary, survivin, released from cancer cells into the TME, serves as a modulator of the T cell response, inhibiting their proliferation and inducing a shift to a type 2 response ( 59 ). Therefore, the presence of the survivin inhibitor YM155 in EY-L is expected to mitigate the immunosuppressive effect of everolimus to some extent. Indeed, our data suggests that EY-L treatment, either alone or in combination with radiation, demonstrated slightly increased CD8 + T cell infiltration in Renca tumors ( Fig. 5 ), which may be responsible for a comparatively better antitumor response for EY-L + R treatment in Renca tumors than 786-O tumors.
Mitotic catastrophe is considered a form of cell death that occurs during or after abnormal mitosis. It is an important aspect of the cellular response to DNA damage, including damage induced by radiation ( 45 ). When this damage is severe and beyond repair, the cell may undergo mitotic catastrophe as a response. Typically, cells have mechanisms to halt the cell cycle to allow for repair in response to DNA damage. If the damage is extensive and irreparable, cells may be arrested in the G2 phase of the cell cycle. Despite the cell cycle arrest, some cells may attempt to undergo mitosis. This is problematic because the damaged DNA is often unevenly distributed between the daughter cells, leading to genomic instability. This can result in cell death or the generation of cells with abnormal chromosome numbers and structures. Mitotic catastrophe often triggers programmed cell death pathways, such as apoptosis or necrosis, as a protective mechanism to eliminate cells with severely damaged DNA and prevent the propagation of genetic abnormalities ( 60 ). This has led cancer researchers across the globe to exploit mitotic catastrophe as an attractive avenue for cancer therapy ( 61 ).
Interestingly, survivin participates in the chromosomal passenger complex and ensures accurate separation of sister chromatids and microtubule stabilization at the late stages of mitosis ( 62 ). Consequently, loss-of-function of the gene encoding survivin can lead to mitotic disturbances such as mitosis delay, chromosome displacement, and cell accumulation in prometaphase ( 63 ). RNAi-based survivin knockdown has been previously shown to induce mitotic catastrophe in multiple cancer and non-cancer cell lines ( 64 – 67 ). Additionally, Y-L downregulates Chk1 and Chk2, both of which are negative regulators of mitotic catastrophe ( 68 , 69 ). On the other hand, mTOR inhibitors alone are not known to induce mitotic catastrophe, but a few studies have shown that combinations of mTOR inhibitors with other genotoxic agents, such as a Chk1 inhibitor or a HASPIN inhibitor, can induce mitotic catastrophe in cancer cells ( 70 , 71 ). Since YM155 (as Y-L) inhibits Chk1 ( Fig. 3 ), it is plausible that a combination of everolimus with YM155 would do the same. Indeed, our data show that EY-L, especially in combination with radiation, induced mitotic catastrophe in RCC tumors in vivo, as illustrated by the abundance of multinucleated cells in the H&E-stained tumor sections ( Fig. 5 ) and a significant increase in the pericentrin/nuclei ratio ( Fig. 6 ).

CONCLUSION
In summary, our study utilized a rational combination of an mTOR inhibitor and a survivin inhibitor in a tumor-targeted liposomal formulation to augment radiation therapy in renal cancer by inhibiting DNA damage repair and enhancing mitotic catastrophe. The combination itself showed excellent tumor growth inhibition; thus, the proposed strategy is poised to act through a two-pronged assault on cancer cells: a) directly inhibiting tumor growth and b) sensitizing cancer cells to radiation. While the present study focused on renal cancer, this strategy may also be useful in other cancer indications, since both everolimus and YM155 have been shown to act as radiosensitizers in a variety of cancers, including lung cancer, breast cancer, prostate cancer, and glioblastoma.

Authors' contributions
HKR developed and characterized the liposomal formulations, performed in vitro and in vivo experiments, and acquired the data. VSM optimized the liposomal formulation. RSA performed the confocal imaging studies and related analysis. NMN helped with the characterization of the liposomal formulation. SKD and EW helped in animal experiments. KP designed the study, performed in vitro and in vivo experiments, acquired and interpreted the data, and wrote the manuscript. KP and DM were responsible for acquiring the funding, the overall supervision of the work, and the final review of the manuscript. All authors have read and approved the final manuscript.
Background
Renal cell carcinoma (RCC) was historically considered to be less responsive to radiation therapy (RT) compared to other cancer indications. However, advancements in precision high-dose radiation delivery through single-fraction and multi-fraction stereotactic ablative radiotherapy (SABR) have led to better outcomes and reduced treatment-related toxicities, sparking renewed interest in using RT to treat RCC. Moreover, numerous studies have revealed that certain therapeutic agents including chemotherapies can increase the sensitivity of tumors to RT, leading to a growing interest in combining these treatments. Here, we developed a rational combination of two radiosensitizers in a tumor-targeted liposomal formulation for augmenting RT in RCC. The objective of this study is to assess the efficacy of a tumor-targeted liposomal formulation combining the mTOR inhibitor everolimus (E) with the survivin inhibitor YM155 (Y) in enhancing the sensitivity of RCC tumors to radiation.
Experimental Design:
We slightly modified our previously published tumor-targeted liposomal formulation to develop a rational combination of E and Y in a single liposomal formulation (EY-L) and assessed its efficacy in RCC cell lines in vitro and in RCC tumors in vivo. We further investigated how well EY-L sensitizes RCC cell lines and tumors toward radiation and explored the underlying mechanism of radiosensitization.
Results
EY-L outperformed the corresponding single drug-loaded formulations E-L and Y-L in terms of containing primary tumor growth and improving survival in an immunocompetent syngeneic mouse model of RCC. EY-L also exhibited significantly higher sensitization of RCC cells towards radiation in vitro than E-L and Y-L. Additionally, EY-L sensitized RCC tumors towards radiation therapy in xenograft and murine RCC models. EY-L mediated induction of mitotic catastrophe via downregulation of multiple cell cycle checkpoints and DNA damage repair pathways could be responsible for the augmentation of radiation therapy.
Conclusion
Taken together, our study demonstrated the efficacy of a strategic combination therapy in sensitizing RCC to radiation therapy via inhibition of DNA damage repair and a substantial increase in mitotic catastrophe. This combination therapy may find its use in the augmentation of radiation therapy during the treatment of RCC patients.

Acknowledgements
The authors would like to thank Brandy Edenfield and Laura Lewis-Tuffin for immunohistochemistry and for assistance with the digitization of the slides, respectively.
Funding
This work was supported by the Academy of Kidney Cancer Investigators Early Career Investigator award W81XWH-21-1-0678 (KP) and NIH grant CA78383 (DM).
Availability of data and materials
All data generated or analyzed during this study are included in this published article and its supplementary information files.

License: CC BY. Citation: Res Sq. 2023 Dec 23;:rs.3.rs-3770403.
|
PMC10775365 (PMID: 38196659)

Introduction
Antiretroviral therapy (ART) has demonstrated remarkable efficacy in mitigating HIV/AIDS-related morbidity and mortality, particularly in resource-limited settings [ 1 , 2 ]. However, the full realization of its benefits has been impeded by the high loss to follow-up (LTFU) [ 3 ], especially within the first year of ART [ 4 ]. Newly initiated ART clients are particularly vulnerable to treatment interruptions due to a multitude of factors, including the severity of their illness, difficulties in disclosing their HIV status, and adaptation to life with HIV and ART [ 5 , 6 ]. Estimates across sub-Saharan Africa (SSA) report an average of 65% retained in care at 36 months [ 3 , 7 ]. Despite the critical importance of client retention, only a few studies have explored the costs of retaining clients in routine ART care in low- and middle-income countries (LMICs) in the SSA [ 8 – 13 ].
The Lighthouse Trust (LT), a national ART Center of Excellence in Lilongwe, Malawi, operates with the Malawi Ministry of Health (MoH) to provide HIV care, treatment, and support across Malawi [ 14 ]. In its two urban flagship clinics in Lilongwe, the LT serves more than 35,000 clients on ART: 24,000 at the Martin Preuss Centre (MPC) and 11,000 at Lighthouse (LH) [ 15 ]. Clients at all LT clinics receive the same services, including integrated care, retention support, and clinical management, using an electronic medical records system (EMRS) [ 16 ]. Since 2006, LH and MPC have implemented a client retention program, “Back-to-Care” (B2C), that traces ART clients who miss a clinic visit by ≥ 14 days by phone or a home visit. B2C plays a critical role in reaching and retaining LT clients in care [ 17 – 19 ]. B2C is reactive, however, intervening only after clients miss visits. In response to growing concern about treatment interruption during the early stages of treatment, LT also introduced the Start Safely to ART (START) program in 2020. This initiative pairs all newly initiating ART clients with Expert Client Treatment Buddies: HIV-positive peer mentors who provide vital psychosocial support and closely monitor up to 15 clients each during the critical first 12 months on ART.
To fill gaps in understanding the overall cost of client retention in routine LMIC settings, the primary goal of this costing study is to conduct a comprehensive assessment and quantification of the financial cost associated with routine ART retention services at the MPC during 2021. Understanding the financial implications of proactive and reactive ART retention interventions at a large, public ART clinic in Lilongwe, Malawi, will contribute to the broader discourse regarding both retention and ART program sustainability. The findings may also help identify potential areas for cost optimization and improvements in resource allocation at the MPC and other public ART clinics in LMIC settings.

Methods
Objective
This comprehensive cost study aimed to improve the understanding of the financial and economic implications of routine proactive and reactive retention interventions for ART clients at the MPC clinic in Lilongwe, Malawi.
Setting: Lighthouse Trust (LT) Martin Preuss Center (MPC)
MPC is the largest public provider of ART services in Malawi. LT umbrella policy and practice are the same across clinics, including all retention interventions. LT staff rotate between the MPC and LH locations as needed. All client data are managed in real time using the EMRS. ART clinic visits are scheduled monthly during the first six months and then every three or six months if the patient is stable and adherent. B2C forms, which capture phone contact information and home address details for tracing, are collected at initiation and ideally updated annually. As an indication of MPC patient volume, between April and June 2023, 1,798 (~10%) of 18,842 scheduled ART visits were missed by ≥ 14 days and referred to B2C.
Client retention programs
Proactive efforts before a visit or within 13 days of missed visits: The ART Buddy program
ART clients receive more intensive, proactive retention support from ART initiation through 12 months, alongside routine B2C. Following initiation, newly enrolled clients receive support from Expert Client “Buddies” during their initial 12 months of care; each Expert Client supports up to ~15 newly initiated ART clients. Buddies remind clients of their scheduled ART visits and follow up immediately after a missed visit, within 1–13 days of the appointment. Buddies also update locator forms for clients who change contact information, such as phone numbers or home locations. Clients who fail to report for any scheduled visit by 14 or more days are referred to B2C. Buddy services are provided only during clients' first year on ART, after which clients continue in B2C only.
Reactive retention efforts after a missed visit ≥ 14 days: Back-to-care (B2C)
B2C traces clients who miss visits by ≥ 14 days, in accordance with MoH policy [ 20 ]. The EMRS is used to identify and refer potential LTFU clients for tracing. A dedicated team of B2C tracers manually reviews the LTFU list to identify and correct any errors in the EMRS data, removing people who attended visits from the tracing list. Clients with completed locator forms are initially traced by phone through SMS or calls, with up to five attempts made; if necessary, up to three home visits are attempted. When clients are successfully reached, the B2C team of field tracers and/or health promoters encourages those who have missed appointments or defaulted on treatment to return to care. The B2C team also conducts semiformal interviews with clients to assess the outcomes of their ART treatment and records this information on paper-based B2C forms, which data clerks subsequently enter into the EMRS. The EMRS helps determine if and when a client returns to care, allowing B2C follow-up to stop for that specific event or month.
Data collection
In adherence to the Global Health Cost Consortium Reference Case guidelines [ 21 ], we performed activity-based microcosting to assess the expenses related to routine ART retention activities at the MPC clinic in Lilongwe, Malawi. Our data encompassed both financial and economic cost estimations for all resources and activities essential for executing the routine retention intervention. Cost information was obtained from the MPC expenditure records, payroll information, and procurement records. We used routine program data to estimate the number of ART clients retained and the number of tracing events in 2021. The financial costs accounted for the direct expenses incurred in the process of retaining ART clients, whereas the economic costs took into consideration the opportunity cost linked to overhead expenses. Since the perspective of the analysis was from the LT organizational perspective (payer), we excluded costs that were not incurred by the clinic, such as medication costs, which are paid by the government, and study-specific personnel that would not be transferable to routine program implementation.
Data analysis
We categorized our cost data into two main groups: fixed (start-up) costs and variable (recurrent) costs (as shown in Table 1 ). Fixed costs encompassed specific activities such as the initial training of retention personnel, a one-time motorcycle insurance premium payment, text messaging system subscriptions, and the procurement of equipment and motorcycles. These fixed expenses were incurred only at the outset of the intervention, with the equipment assumed to have a useful life of 5 years. In contrast, variable costs were those needed to sustain the intervention over time. These variable expenses were further subdivided into distinct input categories, including personnel costs, communication expenses for reaching and following up with ART clients, general office supplies, motorcycle maintenance for client tracing, fuel costs, protective gear for motorcycle riders, and overhead costs representing opportunity costs.
For equipment costs in 2021, we applied a discount rate of 3% over an assumed lifespan of 5 years. To calculate the unit cost of the proactive Buddy intervention, we divided the total expenses incurred during the initial two-week window by the number of clients newly initiated on ART in 2021. In turn, the unit cost for the reactive B2C intervention was determined by dividing the total expenses incurred for ART retention beyond the initial two weeks by the number of tracing events.
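The unit-cost division and equipment annualization described here, together with the personnel-cost scaling used in the sensitivity analysis reported below, can be sketched in a few lines. This is a minimal illustration using figures reported later in the Results (Tables 2 and 4); the annualization formula is a standard assumption, as the paper states only the 3% discount rate and the 5-year equipment life, and small differences from the published per-client figures reflect rounding.

```python
# Minimal sketch of the costing arithmetic (all figures in 2021 USD,
# taken from the Results tables; the annualization formula is an
# assumption, not stated explicitly in the paper).

DISCOUNT_RATE = 0.03
EQUIPMENT_LIFE_YEARS = 5
MWK_PER_USD = 825  # 2021 exchange rate used in the study

def mwk_to_usd(amount_mwk: float) -> float:
    return amount_mwk / MWK_PER_USD

def annualized_equipment_cost(price_usd: float,
                              rate: float = DISCOUNT_RATE,
                              years: int = EQUIPMENT_LIFE_YEARS) -> float:
    """Spread a one-time purchase over its useful life at the discount rate."""
    factor = (1 - (1 + rate) ** -years) / rate  # ~4.58 for 3% over 5 years
    return price_usd / factor

def unit_cost(total_usd: float, n_events: int) -> float:
    return total_usd / n_events

# Unit costs from the reported totals and event counts:
buddy = unit_cost(108_504, 3_280)   # ~$33 per newly initiated client
b2c = unit_cost(129_060, 7_588)     # ~$17 per tracing event
combined = unit_cost(108_504 + 129_060, 3_280 + 7_588)  # ~$22 overall

# Univariate sensitivity: scale B2C personnel costs (reported: $73,778) by +25%.
b2c_plus25 = 129_060 + 0.25 * 73_778  # ~$147.5k, close to the reported ~$147k

print(f"Buddy ${buddy:.0f} | B2C ${b2c:.0f} | combined ${combined:.0f} | "
      f"B2C at +25% personnel ${b2c_plus25:,.0f}")
```

Note that the Buddy figure lands at $33, matching the sensitivity-analysis baseline reported below, while Table 4 reports $34; the difference is rounding.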
All costs were converted from Malawi Kwacha (MWK) to US dollars using the 2021 exchange rate of US$1 = 825 MWK. Our analyses were conducted using Microsoft Excel (version 16.76; Microsoft, Redmond, WA). We also conducted a sensitivity analysis to assess how changes in personnel costs, a significant component of the intervention’s expenses, might affect overall costs. This analysis was prompted by the inherent challenge of distinguishing personnel expenses related to the retention interventions from those associated with routine care.

Results
MPC ART clients
In 2021, 3,280 clients were newly initiated on ART, and there were 7,588 tracing events among clients on ART. All new ART clients are expected to have an initial encounter with a health promoter and to receive support during their first year of treatment, ensuring their continued care. Additionally, the B2C approach encompasses tracing activities for individuals who missed appointments or defaulted on their treatment.
Retention costs
The total cost of ART retention interventions at the MPC was $237,564 ( Table 2 ). The proactive Buddy phase incurred a total cost of $108,504, with personnel costs being the most significant at $97,764, followed by training at $6,592. In the reactive B2C phase, the total cost was $129,060, with personnel expenses remaining substantial at $73,778; overhead, fuel, and vehicle costs also emerged as significant contributors, at approximately $12,518, $10,427, and $9,105, respectively.
Retention cost categories
Table 3 provides a comprehensive breakdown of the fixed and variable costs associated with ART retention care at the MPC, classified into proactive Buddy and reactive B2C interventions. For the Buddy activities, the fixed (start-up) costs amounted to $9,647, representing 9% of the total cost, while the variable (recurrent) costs were significantly greater at $98,587, accounting for 91% of the total cost. The overall cost of early intervention was $108,504. In contrast, B2C incurred higher fixed (start-up) costs at $16,757, comprising 13% of the total cost, with variable (recurrent) costs of $112,303, making up 87% of the total cost. The total cost of B2C was $129,060. This breakdown offers valuable insights into the allocation of resources and the financial aspects of ART retention care at the MPC.
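The percentage shares in this breakdown follow directly from the start-up and recurrent totals; a quick check using the Table 3 figures (2021 USD) as quoted above:

```python
# Recompute the fixed/variable shares reported in Table 3 from the
# start-up and recurrent component totals quoted in the text.

def shares(fixed: float, variable: float) -> tuple[float, float]:
    total = fixed + variable
    return fixed / total * 100, variable / total * 100

buddy_fixed_pct, buddy_var_pct = shares(9_647, 98_587)    # ~9% / ~91%
b2c_fixed_pct, b2c_var_pct = shares(16_757, 112_303)      # ~13% / ~87%

print(f"Buddy: {buddy_fixed_pct:.0f}% fixed / {buddy_var_pct:.0f}% variable")
print(f"B2C:   {b2c_fixed_pct:.0f}% fixed / {b2c_var_pct:.0f}% variable")
```

The recomputed shares match the reported 9%/91% (Buddy) and 13%/87% (B2C) splits.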
Per client unit cost
The unit cost of ART retention care at the MPC for the Buddies, covering care for 3,280 new clients, was $34 ( Table 4 ). In contrast, B2C, with 7,588 tracing events, yielded a lower unit cost of $17. Combining both Buddies and B2C, the overall unit cost for ART retention care at the MPC in 2021 averaged $22 per client/tracing event.
Cost drivers
Figures 1 and 2 provide an overview of the primary cost drivers for the ART retention interventions in both the Buddy and B2C phases. In the proactive Buddy intervention ( Fig. 1 ), personnel costs constituted the largest portion, accounting for 86% of total expenses, while training and protective gear costs represented 6% and 3%, respectively. In the reactive B2C intervention, personnel costs remained substantial but decreased to 57% of the total ( Fig. 2 ). Overhead and equipment costs became more prominent at 10% each, followed by fuel and protective gear costs at 8% and 6%, respectively.
Sensitivity analysis
Given the significant contribution of personnel costs to both the proactive Buddy and reactive B2C retention interventions, coupled with the ongoing trend of rising personnel expenses, a sensitivity analysis is essential. We performed a univariate sensitivity analysis to evaluate the total and unit costs of the interventions. When adjusting for a 25% increase in personnel costs, the total cost of the proactive Buddy intervention increased from $108,000 to $136,000, raising the unit cost from $33 to $42. Similarly, the cost of the B2C intervention increased from $129,000 to $147,000, raising the per-tracing cost from $17 to $19. These findings indicate that the proactive intervention is more personnel-intensive and more susceptible to changes in personnel costs than the B2C intervention, as evidenced by the greater increase in cost per client retained.

Discussion
In this study, we provide a comprehensive breakdown of the routine costs associated with proactive and reactive ART retention interventions at a large, public ART clinic in Lilongwe, Malawi. This cost analysis offers valuable insights into the financial aspects of an ART retention intervention conducted in a resource-constrained setting. In the proactive Buddy program, expenses totaled $108,504, with personnel costs being the largest contributor. The late retention program, B2C, incurred a total cost of $129,060, where personnel expenses remained substantial but overhead, fuel, and vehicle expenses also played a significant role. The study highlights the critical cost drivers across retention phases, offering important information for LMIC policymakers and healthcare administrators to consider for retention service allocations and program planning.
Although this analysis was not specifically a cost-effectiveness analysis, the unit cost analysis offers insights into the drivers of effective retention interventions. For the proactive retention program, the unit cost per client (covering 3,280 new clients) was $34, while the late retention program had a lower unit cost of $17 per tracing event (involving 7,588 tracing episodes). Although these numbers might suggest that proactive retention is more expensive and therefore less cost-efficient, this may be misleading. Early retention initiatives such as Buddies prevent or reduce the likelihood of missed visits, keep contact information updated, and may help foster engagement in care beyond the first 12 months, when Buddy support sunsets. Moreover, these findings suggest that $17 per tracing event is likely a reasonable cost to retain clients in care, supporting continued investment in B2C. Overall, the combination of proactive and late-program retention activities may be the most cost-efficient model, averaging $22 per client/tracing event. This average retention cost of $22 may serve as a valuable benchmark for evaluating cost efficiency and informing resource allocation decisions.
Retention efforts are recognized as critical but costly aspects of quality ART programs at scale. However, retention costs at the MPC are lower, or far lower, than those reported previously, suggesting cost efficiency. For example, a recent costing study of three HIV retention models in SSA found that improving ART retention by 25% could cost between $93 and $6,518 per client [ 8 ]; such retention costs pose sustainability challenges. MPC costs are more in line with lower-cost retention models, at $36.56 USD per client. In a community-based tracing model in Tanzania, client tracing services had a unit cost of $47.56 USD, while support for a client returning to care cost $206.77 USD; for tracing services, B2C costs less than this lower-impact model [ 22 ].
The relatively low retention intervention costs at the MPC should not obscure pervasive and persistent funding gaps that reduce Buddy or B2C effectiveness. First, although retention at LT clinics, including the MPC, is consistently more than 75% at 12 months, it still falls short: an average of 63% of ART clients are retained at 24 months, far below the 90% retention target needed for client viral load suppression and epidemic control [ 22 ]. Second, current funding provides resources only for clients during their first 12 months of care, shortchanging clients who may benefit from longer-term support. Third, additional resources are needed to keep location information accurate and up to date. In 2021, 1,803 clients (29%) remained untraceable due to a lack of updated address information, preventing efforts to return them to care. Furthermore, approximately 1,184 clients (19%) returned to the facility after the tracing list was generated and verified, leading to wasted tracing resources. Finally, gaps in B2C grow as client volume increases while funds decrease [ 23 ]; at the MPC clinic from April to June 2023, only 40% (719/1,798) of potential LTFU patients were successfully identified. Additional proactive retention efforts, such as LT’s recent two-way texting system to improve early retention support [ 24 , 25 ], are needed to reduce LTFU before it happens.
Limitations
Costs for complementary retention activities were not included: activities such as Expert Client escorts from HIV testing sites, optional Buddy companionship during routine ART visits in year 1, and additional adherence counseling with Expert Clients were excluded because they fall outside the scope of routine retention in other LMIC settings. Other complementary Lighthouse programs that enrich the client experience are likewise outside the scope of retention-specific activities but are likely to improve engagement in care; these initiatives include call center services designed to enhance client support and engagement, psychosocial counseling, and referral services for clients who have been exposed to gender-based violence (GBV). These exclusions likely lead to an underestimate of MPC retention costs. Moreover, the analytic approach has several limitations stemming from the use of routine data and funding constraints, including the single-center analysis, the assumption of linear sensitivity, the reliance on clinical records, and the absence of time-in-motion analysis to estimate actual personnel costs. Finally, we did not cost the resources needed to return clients to care, an area for further expansion of this analysis in the future. Despite these limitations, this study provides valuable insights into the financial aspects of ART retention interventions at the MPC, emphasizing the critical role of personnel expenses and the distribution of fixed and variable costs across both the proactive and late program stages.

Conclusion
These findings significantly contribute to our understanding of the financial landscape surrounding ART retention interventions. To ensure the long-term sustainability and efficiency of the ART retention care program at the MPC, it is imperative to explore resource optimization strategies for both proactive and late retention programs while maintaining continuous cost monitoring and evaluation. To improve retention efficiency, focusing scarce retention resources on clients at the highest risk of LTFU and during time periods when the risk of LTFU is highest (e.g., ART initiation) may be advisable. Additionally, linking electronic medical record systems (EMRSs) across facilities, such as via a National Health Management Information System (HMIS), could reduce the impact of silent transfers (clients moving clinics informally) and of clients who receive emergency ART supplies while traveling. Overall, these results reinforce calls for healthcare policymakers and administrators to continue to advocate for retention resources to ensure the wellness of both PLHIV on ART and the overall ART program.
Introduction
Antiretroviral therapy (ART) improves the health of people living with HIV (PLHIV). However, a high loss to follow-up, particularly in the first year after ART initiation, is problematic. The financial expenses related to client retention in low- and middle-income countries (LMICs) in sub-Saharan Africa are not well understood. This study aimed to comprehensively assess and quantify the financial costs associated with routine ART retention care at Lighthouse Trust’s (LT) Martin Preuss Centre (MPC), a large, public ART clinic in Lilongwe, Malawi.
Methods
We performed activity-based microcosting using routine data to assess the expenses related to routine ART retention services at the MPC for 12 months, January-December 2021. MPC provides an “ART Buddy” from ART initiation to 12 months. The MPC’s Back-to-Care (B2C) program traces clients who miss ART visits at any time. Clients may be traced and return to care multiple times per year. We assessed client retention costs for the first 12 months of treatment with ART and conducted a sensitivity analysis.
Results
The total annual cost of ART retention interventions at the MPC was $237,564. The proactive Buddy phase incurred $108,504, of which personnel costs contributed $97,764. In the reactive B2C phase, the total cost was $129,060, with personnel expenses remaining substantial at $73,778. The Buddy unit cost was $34 per client; the reactive B2C intervention cost $17 per tracing event. The unit cost for ART retention in the first year of ART averaged $22 per client.
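The arithmetic relating these totals and unit costs can be sketched as below. This is an illustrative back-calculation only: the denominators (numbers of Buddy clients and tracing events) are derived from the reported totals and unit costs, not taken from the study itself.

```python
# Illustrative back-calculation from the reported annual totals (USD).
# Client and tracing-event counts below are derived, not reported.
buddy_total = 108_504   # proactive Buddy phase (reported)
b2c_total = 129_060     # reactive Back-to-Care (B2C) phase (reported)

annual_total = buddy_total + b2c_total
print(annual_total)  # 237564, matching the reported annual cost

# Denominators consistent with the reported $34/client and $17/event:
n_buddy_clients = round(buddy_total / 34)    # ~3,191 clients
n_tracing_events = round(b2c_total / 17)     # ~7,592 tracing events

print(round(buddy_total / n_buddy_clients))   # 34
print(round(b2c_total / n_tracing_events))    # 17
```

Note that a tracing event, not a client, is the B2C unit, since clients may be traced multiple times per year; that is why the blended first-year figure of $22 per client is lower than the Buddy unit cost.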
Conclusion
This study sheds light on the financial dimensions of ART retention interventions at LT's MPC. ART retention is both costly and critical for helping clients adhere to visits and remain in care. Continued investment in the human resources needed for both proactive and reactive retention efforts is critical to engaging and retaining patients on lifetime ART.
Funding:
The research reported in this publication was supported by the Fogarty International Center of the National Institutes of Health (NIH) under Award Number R21TW011658/R33TW011658 (CF). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors would also like to thank the Lighthouse Trust retention program and research department for their partnership in this costing study.
Availability of data and materials:
The full data set used for this costing study is included as supplementary material.
License: CC BY. Citation: Res Sq. 2023 Dec 22;:rs.3.rs-3773952
PMC10775376 (PMID: 38196608)
Introduction
Hidradenitis Suppurativa (HS) is a chronic skin condition characterized by inflammatory nodules and abscesses that progress to draining sinus tracts, punctuated by periods of disease exacerbation and flaring. In addition to a decreased quality of life 1 , HS is often co-morbid with other high-risk conditions 2 including cardiovascular disease, metabolic syndrome, and diabetes mellitus, and is treated with higher doses of broad immunosuppressants than is typical for other skin conditions 3 . Given the increased morbidity of both HS and the treatments associated with it, an in-depth understanding is needed of patients' perceptions of their disease in that context. Previous studies have emphasized the crucial need for vaccination, especially in immunosuppressed patients 4 ; however, vaccination rates in patients with chronic inflammatory diseases remain low 5 . Vaccination in patients with HS is particularly important for a variety of indications, including the HPV vaccine to limit progression of disease to squamous cell carcinoma 6 , the Shingrix vaccine for potential herpes zoster risk with new JAK inhibitors 7 , and the Prevnar vaccine for patients treated with TNF-alpha inhibitors 8 . Little is known about how patients come to make medical decisions about their HS and whether they consider their HS when making healthcare decisions, especially regarding vaccination. In this study, we used systematic, in-depth qualitative interviews with patients with HS to explore patient perspectives of their disease, its impact on their personal health, and its relationship to their health-care decision making process.
Methods
Study design
The qualitative approach used in this study was grounded theory. Individual semistructured, virtual interviews were performed with adults diagnosed with HS. The virtual interviews were conducted over a video conference platform. The interview questions explored patients’ perspectives on how their personal health and health care decision making are impacted by HS including the impact of HS around vaccine decision making. This study was deemed exempt (Category 2) by the Mass General Brigham IRB, and patients provided verbal consent before participating in the interview.
Study participants
Invitations to participate in the research were sent to individuals between 18–45 years of age who had an appointment for hidradenitis suppurativa at Brigham and Women’s Hospital dermatology clinics between 1/1/2021–6/30/2022. English speaking individuals between the ages of 18–45 with a diagnosis of HS for at least 1 year were included in the study. Participants less than 18 years of age, deceased, or those who opted out of research were excluded from this study.
Data collection
After careful review of the literature and drawing on the expertise of the investigators (MHN, APC, LPC, JSB), an interview guide was created. Pilot testing of the interview guide was performed by simulating 5 interviews, after which feedback was considered and the guide was updated. A medical student research fellow who had received formal interview training and had no previous relationship to the study subjects conducted interviews from 10/27/2022 to 01/18/2023. Each interview was conducted virtually in a private room, lasted 20–30 minutes, and was audio-recorded. The audio-recorded interviews were then transcribed using institutionally secured Microsoft Word and edited for accuracy. Interviews with 23 subjects were necessary to achieve thematic saturation.
Data analysis
The interview script development and data analyses were completed using guidelines from the Health Belief Model 9 . Two senior investigators (MHN, APC) reviewed the five initial transcripts and, drawing on their baseline clinical knowledge, created a codebook to identify major themes in the transcribed data. Each interview transcript was independently coded in NVivo by a medical student research fellow and a research assistant (NA, MA). Each round of coding was followed by a group discussion, and discrepancies were resolved by consensus. After major themes were identified, code frequencies were calculated and representative quotations were extracted from the interview transcripts for each theme. This study adhered to the reporting guidelines of the Consolidated Criteria for Reporting Qualitative Research 10 .
Results
A total of 23 study participants with hidradenitis suppurativa were interviewed; 21 (91%) were female and 2 (9%) were male. The mean age of participants was 31.2 years (SD: 6.9, range: 19–43) ( Table 1 ).
Impact of HS on Personal Health
Most participants described HS as posing a significant impediment to their physical health and wellness by imposing painful or inconvenient limitations ( Table 2 ). For example, one participant explained that “there have been times where I’ve wanted to exercise, or in the summer I wanted to go swimming, however, if I was having a really bad HS flare, then I wouldn’t want to do those things because it’s too painful.” Many patients also described how HS impacted their mental health. One participant stated, “It impacts my mental health for sure because it’s very painful [...] and every time I’m in that situation, I just want to cry because I feel powerless.” Additionally, patients described experiences that negatively impacted their mental/emotional health due to poor self-esteem, shame, lack of control over their body, fear, and changes in sexual health. One participant described “it really affects my body image and self-image. It took me a long time to come to terms with the fact that HS is not my fault.”
Many patients described how living with HS not only imposed limits both physically and psychologically, as outlined above, but also increased their awareness of their own health generally. Patients described being more aware of their diet and exercise habits as they related to HS flaring, being aware of medication changes as they might relate to disease, being more vigilant about how they chose their physicians, and being more deliberate about family planning. For example, one participant described changing their diet to prevent a flare-up: “I’m trying to see if there’s a connection to what I eat or what I drink or the movements that I’m making and how I feel.” One participant explained that “I have been wanting to conceive but I am limited by a lot of medications and treatments for HS.”
Impact of HS on Medical Decision Making:
When asked about what resources are used to make healthcare decisions, participants expressed getting the opinion of their physicians, pharmacists, families/colleagues as well as their peers for information regarding health care decisions. One participant explained that their “ primary care physician or [their] dermatologist are the two doctors that [they] go [to] for any kind of medication-related decision .” The internet including social media, internet forums, governmental-sponsored websites and dermatology-specific websites, like the HS Foundation, were other resources identified by participants.
Participants in this study were asked how HS impacted their decisions regarding vaccines and the responses were varied ( Table 3 ). One participant mentioned being more cognizant of mild side effects of vaccines due to their already immunocompromised state from HS medications. The participant explained “I don’t want [vaccines] to interact with the Humira, which is helping HS, but it’s making my entire body weaker.” Other patients described that having HS makes them more likely to get a vaccine. “I know that my immunity is compromised, and so, I am much more aware of and likely to take vaccines than a typical healthy person.” Furthermore, some patients expressed that HS does not impact decisions made about vaccines.
When asked about the HPV vaccine, 74% of participants reported receiving the vaccine, 17% did not receive the HPV vaccine, and 9% were unsure. Upon further exploring patients’ knowledge about the HPV vaccine, some expressed they were aware that the vaccine can “ prevent HPV and cancer” while others described “I honestly don’t know that much about it [HPV vaccine]. My mom had me do it when I was a kid.”
Discussion
The patient perspectives explored in this qualitative study revealed that HS has a significant impact on multiple aspects of a patient’s perceived personal health which influences their decisions around their medical care. HS patients describe considering their skin condition when deciding which medications to take, which foods to eat, and which medical providers to seek. Female participants also explained their hesitation around becoming pregnant due to the fear of disease flaring, medication side effects on pregnancy and breastfeeding due to the location of lesions around the nipples. Patients cited HS flare avoidance as a driver of their healthcare decisions regarding the above choices with respect to food, medication, and family planning.
Additionally, these same perspectives are reflected in HS patients’ choices around vaccines. Participants discussed how HS influences their attitudes around treatment-related immunosuppression when making medical decisions, including about immunization. HS patients have previously reported concerns such as fear of long-term consequences and side effects specifically about the COVID-19 vaccine. 11 Some patients are more cautious of their HS because of the immunosuppression associated with its treatment, making them more likely to receive any vaccine, whereas others expressed concerns about certain side effects or worsening of chronic illnesses, including HS. Patients may hesitate around receiving vaccines due to their fear of causing an HS flare up which in turn affects their personal health due to the significant physical limitations and pain. Moreover, the treatment for HS with immunosuppressants such as Humira causes apprehension around vaccinations as patients fear vaccines may interact with their treatment and result in exacerbation of their condition. For these reasons, some patients avoid vaccines and are therefore at a higher risk of contracting vaccine preventable diseases.
Sources of information on vaccination and other healthcare-related topics were varied, but healthcare providers were a common source for patients. However, because many HS patients live with the condition an average of 7–10 years before a diagnosis is made, eroded trust in the medical community can become a barrier to understanding their condition. 12 This decreased trust in the primary source of health information for HS patients can lead to a lack of adherence to immunizations and medical advice, further hindering their health care. It is important for medical providers to build trust and increase conversations around vaccines with their patients to address any concerns or misconceptions, while contextualizing such conversations within the framework of patients’ HS.
While this qualitative study expands on the existing literature, it also has some limitations. Participants in this study were not classified based on disease severity, so we are not able to differentiate how health-related quality of life relates to disease severity. Additionally, we interviewed participants from only one academic center in Massachusetts, a state with easily accessible state-run insurance where most patients are insured; the responses therefore may not represent the experiences of unique patient populations across the United States. However, learning any patient perspective provides invaluable information to clinicians.
Conclusion
In conclusion, this qualitative study found that HS has a significant impact on a patient’s perception of their personal health, which in turn affects their medical decision making, including decisions around vaccines. These findings are important because they provide more detailed insight into the individual patient experience of living with a chronic disease, which can help physicians provide individualized, patient-centered education and treatment.
Background:
Hidradenitis suppurativa (HS) is a chronic skin disease that causes significant burden for patients in multiple aspects of their life. However, the details regarding the impact on factors aside from skin are limited.
Objective:
We explored patient perspectives around the impact of HS on personal health and how that affects a patient’s health care decision making.
Methods:
Individual, semi-structured, virtual interviews were conducted with adults with HS by a trained medical student. The interviews were performed over a private video conference platform. English-speaking individuals between the ages of 18–45 with a diagnosis of HS for at least 1 year were invited to participate in the study. The transcripts were coded by the medical student and a research assistant, and discrepancies were resolved by group consensus. This study followed the reporting guidelines of the Consolidated Criteria for Reporting Qualitative Research.
Results:
23 participants were interviewed; 21 (91%) were female and 2 (9%) were male. The mean age was 31.2 years. Patients expressed an increased awareness of their personal health because of their HS, including considering HS with respect to what they ate, the medications they took, the physicians they sought, and their family planning decisions. Some participants stated that HS made them more likely to receive vaccines while others described the two as unrelated.
Conclusions:
Patients with HS considered their skin disease when making medical decisions broadly. Many specifically considered their disease when making decisions regarding health maintenance and immunizations, though some did not consider the two related.
Acknowledgements:
There are no acknowledgements.
Funding sources:
This study was funded by a K23 Career Development Award (K23-AR073932) from the National Institute of Arthritis, Musculoskeletal and Skin Diseases (MHN).
License: CC BY. Citation: Res Sq. 2023 Dec 22;:rs.3.rs-3778510
PMC10775378 (PMID: 38196646)
Introduction
The mosquito-transmitted protozoan parasite Plasmodium falciparum is the major etiological agent of human malaria, causing more than 200 million clinical cases and 500,000 deaths per year, especially in young children in sub-Saharan Africa ( 1 ). Vector control strategies such as insecticide-treated nets (ITNs) or indoor residual spraying (IRS) have been the most effective approaches for malaria control. The documented reduction in the efficacy of insecticides and anti-parasite drugs, arising from the evolved resistance of mosquitoes and parasites, respectively, calls for the development of new malaria control interventions ( 2 ). RTS,S/AS01 (RTS,S), the first-ever approved malaria vaccine, was released through a pilot program in Ghana, Kenya, and Malawi in 2019 and demonstrates only modest protective efficacy against malaria ( 3 ). During its life cycle, Plasmodium progresses through multiple developmental stages within the mosquito vector (sexual stage) before being transmitted to the human host through blood feeding. The injected Plasmodium sporozoites migrate to the liver and invade the hepatocytes, where they develop into merozoites.
The merozoites are released into the bloodstream, where they have a natural tropism for invading the red blood cells (RBCs). Within the RBC, they multiply until the cell bursts and releases merozoites that can infect other RBCs, eventually causing the clinical symptoms of malaria. It is at this point that the transition into micro-(male) and macro-(female) gametocytes occurs. The gametocytes have five distinct maturation stages: only stage V (five) gametocytes can progress through the sexual reproduction that occurs in the mosquito host ( 4 ). Once ingested by female mosquitoes with a blood meal, gametocytes undergo gametogenesis to produce male and female gametes that mate to form zygotes. The zygotes transform into the motile ookinetes which invade the mosquito midgut, all within 18–36 hours post ingestion of the infected blood meal. The ookinetes traverse the mosquito midgut epithelium and differentiate into oocysts at the midgut basal side. Upon maturation, one oocyst releases thousands of sporozoites into the mosquito’s hemolymph, eventually invading the salivary glands to complete the malaria parasite transmission cycle upon a second blood meal ( 5 ).
The complexity of the malaria parasite’s sporogonic cycle in the mosquito vector offers multiple opportunities for intervention to halt parasite transmission. Targeting parasite antigens within the mosquito serves as the basis for transmission-blocking vaccines (TBV) ( 6 , 7 ) and the development of transgenic mosquitoes expressing anti- Plasmodium molecules ( 8 ).
While a plethora of anti- Plasmodium effectors have been developed to block the parasite as it invades the mosquito midgut epithelium or translocates from the midgut to the salivary glands, molecular targets for blocking the parasite at the earlier gametocyte stages remain to be fully identified ( 9 – 13 ). In this study, we focused on developing and producing a gametocyte-stage blocker to target the early infection stages. It has previously been reported that antisera isolated from immunized mice and monoclonal antibodies targeting sexual-stage antigens can successfully inhibit Plasmodium infection ( 14 – 18 ). After gametocyte ingestion, Plasmodium ’s sporogonic development and malaria transmission proceed through gamete fusion, achieved by species-specific male-female gamete recognition mediated by membrane proteins on the gamete surface. According to previous studies, only three Plasmodium proteins have a demonstrated role in this recognition process: P48/45, P47, and P230 ( 19 – 22 ). Pfs230 plays a role in male/female gamete fusion, male gamete exflagellation, and interaction with erythrocytes. Due to its large size (> 230 kDa) and complex disulfide-bonded structure, recombinant expression of full-length Pfs230 has not yet been successful; however, polyclonal antisera raised against the cysteine-rich domain 1 of Pfs230 have shown Plasmodium -blocking activity ( 23 ). Domain 1 is relatively well conserved compared to other domains of Pfs230 and has therefore become a leading malaria transmission-blocking vaccine candidate ( 22 , 23 ). Here, we used a standard immunization protocol to produce monoclonal antibodies targeting Pfs230 and identified an effective transmission-blocking clone (13G9) based on co-feeding assays with P. falciparum gametocyte cultures through a standard artificial membrane feeding assay (SMFA).
The anti-Pfs230 monoclonal antibody 13G9 showed the strongest anti- Plasmodium activity among the twenty monoclonal antibody candidates tested in this study.
Materials and methods
Antigen production, immunization, and monoclonal antibody production
The Pfs230 D1M domain (SVLQSGALPSVGVDELDKIDLSYETTESGDTAVSEDSYDKYASQNTNKEYVCDFTDQLKPTESGPKVKKCEVKVNEPLIKVKIICPLKGSVEKLYDNIEYVPKKSPYVVLTKEETKLKEKLLSKLIYGLLISPTVNEKENNFKEGVIEFTLPPVVHKATVFYFICDNSKTEDDNKKGNRGIVEVYVEPYGNKING) was codon-optimized for expression in E. coli (Fig. S1). The synthesized sequence with a 6x HIS tag on the C-terminus was cloned into a pET30a vector (EDM Millipore) and expressed in E. coli BL21 Star (DE3). Induction of the recombinant protein was achieved with IPTG at 15°C for 16 hours as per standard protocols (GenScript). Cell pellets were resuspended in lysis buffer followed by sonication. The supernatant resulting from centrifugation was retained for purification. Target proteins were dialyzed and sterilized through a 0.22 μm filter before being stored in aliquots. The concentration was determined by BCA protein assay with BSA as a standard. Protein purity and molecular weight were determined by standard SDS-PAGE along with western blot confirmation (GenScript, Fig. S2).
Five mice were immunized according to a standard immunization protocol ( 39 ), and 50 μl of antiserum was collected after every injection. Hybridoma cell lines (N = 20: 1E3, 1F11, 3D1, 3D6, 3F10, 3G11, 4B6, 4G8, 7A7, 9F3, 11A2, 12D9, 12E1, 12H6, 13G9, 14D2, 14F11, 15A3, 15F8, 15E9) were produced by fusion of mouse myeloma cells SP2/0 with splenocytes from BALB/c immunized mouse S5 according to the standard protocol (GenScript).
Cell lines
Hybridoma cells were thawed into a 37 °C water bath, adapted to culturing conditions, and kept in complete medium (90% DMEM + 10% FBS, Gibco), 5% CO2, 37 °C.
IgGs and monoclonal antibody isolation
IgG fractions from antisera and monoclonal antibodies from hybridoma supernatants clone 13G9, 3F10, or 14D2 were isolated using NAb TM Protein G kit (Thermo Scientific TM ) and stored in Phosphate Buffered Saline, pH 7.4 after buffer exchange with Zeba Desalt Spin columns 4 MWCO (Thermo Scientific TM ). Antibody stocks were concentrated at the desired volume with an Amicon ® ultra – centrifugal filter unit (30kDa filter Millipore-Sigma) and stored at −20 °C.
ELISAs (enzyme-linked immunosorbent assay)
The in vitro binding activity of antisera, hybridoma supernatants, and monoclonals was evaluated by ELISA. 96-well microtiter plates (Immulon 4HBX - Thermo Scientific TM ) were coated with 1 μg/ml of the Pfs230 D1M domain in carbonate-bicarbonate buffer pH 9.6 and kept overnight at 4°C in a humidified chamber. Unbound target protein was removed by four rinses with PBS containing 0.01% Tween 20 (PBST), and wells were blocked for 1 hr at room temperature (RT) with SuperBlock blocking buffer in PBS (Thermo Scientific TM ). Following three rinses with PBST, plates were incubated overnight in a humidified chamber with serial dilutions of antiserum from each immunized or naïve mouse, 100 μl of hybridoma supernatant, or increasing concentrations of 13G9, 3F10, or 14D2 monoclonal antibodies in SuperBlock blocking buffer under gentle rocking. Wells lacking primary antibodies were used as a negative control. Following four rinses with PBST, wells were incubated for 1 hr at RT with 100 μL of Peroxidase-AffiniPure goat anti-mouse IgG, Fcγ fragment-specific secondary antibody (Jackson ImmunoResearch Laboratories) diluted 1:5000 in SuperBlock blocking buffer (Thermo Scientific TM ). Following four rinses with PBST, wells were incubated with TMB substrate (SeraCare) at room temperature in the dark with gentle rocking. The reaction was stopped after 20 minutes with 50 μl of stop solution (Thermo Scientific TM ) per well. Absorbance readings were immediately taken in duplicate at 450 nm with a microplate reader (Azure). Each sample was tested in at least 2 independent experiments.
Immunohistochemical staining and microscopy
Blood smears of P. falciparum gametocytes were air-dried after methanol fixation and evaluated by IFA. After membrane permeabilization and blocking of nonspecific binding (3% BSA, 0.1% saponin in PBS for 1 hr at RT) followed by 3 PBS washes, the preparations were incubated individually with the hybridoma supernatants; with 10 μg/mL monoclonal antibodies from the 13G9, 3F10, or 14D2 clones; or with the IgG-enriched fraction isolated from complete hybridoma cell media, in 1% BSA in PBS at room temperature for 1 hr. AlexaFluor TM 488 anti-mouse IgG Fc antibody (1:1000 dilution; Invitrogen) in PBS was used for secondary staining and detection. After 3 washes with PBS, slides were left to dry and mounted with ProLong TM Gold Antifade (Invitrogen). Microscopic examination was performed the following day with a Zeiss AXIO fluorescence microscope system. Each sample was tested in at least 2 independent experiments.
Mosquito rearing, Plasmodium falciparum infection and statistical analysis
Anopheles mosquitoes were maintained on a 10% sugar solution at 27°C, 70%–80% humidity, and a 12-hour light/dark cycle according to standard rearing procedures. Anti- Plasmodium activity was determined by SMFA. The infectious blood meal was prepared with NF54 P. falciparum gametocyte cultures, active serum, and RBCs (provided by the Hopkins Malaria Research Institute Core Facility) ( 40 , 41 ), complemented with our experimental samples (IgG fractions from antisera or hybridoma supernatants) or PBS. After the adult mosquitoes were starved for 3 to 6 hours, they were allowed to feed for 1 hr on artificial membrane feeders at 37°C. Only the cohort of fully engorged blood-fed mosquitoes was selected and kept until 8 dpi for oocyst counting and infection prevalence assessment. Midguts were dissected in phosphate-buffered saline (PBS) and stained with 0.02% PBS-buffered mercurochrome (Millipore Sigma). Oocysts were examined using a light-contrast microscope (Olympus). All experiments were repeated at least 2 times. Each biological replicate corresponds to a different mosquito population cage, and each population corresponds to a different generation. All graphs were generated using GraphPad Prism 8 software, and the statistical methods used for each experiment are indicated in the respective figure legends.
Results
Mouse-antisera generated after immunization with recombinant Pfs230 D1M domain show high reactivity in vitro
Pfs230 is a 230 kDa cysteine-rich protein, originally present as a 360 kDa precursor on the gametocyte surface ( 24 ). It includes 14 cysteine-rich domains (CM) and a natural protease cleavage site at position 542 ( 25 ). Previous studies have reported that high transmission-blocking activity can be achieved using the CM1 domain as an immunogenic antigen ( 23 ). In addition, analyses of polymorphisms within that region revealed only two predominant amino-acid substitutions, at positions G605S and K661N, with G605S having the highest frequency (AF 0.94) ( 23 ). A low polymorphism frequency in the targeted epitope is a desirable trait that reduces the risk of escape mutations arising in the parasite that would impair the efficacy of the antibody. Twelve new putative missense mutations have recently been identified in the same region; however, they are based on de novo variant call data that require further validation (Fig. S3) ( 26 ).
Our selected antigen for BALB/c mouse immunization comprised a 195-amino-acid region from the cleavage site at position 542 through the end of the cysteine-rich domain 1, which we refer to as the Pfs230 D1M domain in accordance with previous publications ( 23 , 26 ) ( Fig. 1A ). Test bleeds were collected after the 3rd antigen boost to assess antibody titers elicited by immunization according to standard protocols. Indirect ELISAs (enzyme-linked immunosorbent assays) with antiserum collected from each individual mouse were used to determine antibody titers (as illustrated in Fig. 1B ). All samples were found to be reactive at a 1:512,000 dilution, with mice S4 and S5 showing the highest antibody titers ( Fig. 2A ), confirming the highly immunogenic properties of the Pfs230 D1M domain.
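The endpoint-titer logic behind such serial-dilution ELISAs can be sketched as below. This is a minimal, hypothetical illustration: the dilution series mirrors the 1:512,000 endpoint mentioned above, but the OD450 values and cutoff are invented for the example and are not the study's data.

```python
def endpoint_titer(dilutions, ods, cutoff):
    """Return the reciprocal of the last serial dilution whose OD450
    stays at or above the cutoff (a common endpoint-titer rule;
    cutoff choice here is illustrative)."""
    titer = 0
    for dilution, od in zip(dilutions, ods):
        if od >= cutoff:
            titer = dilution  # still reactive at this dilution
        else:
            break  # first sub-cutoff well ends the series
    return titer

# Hypothetical 2-fold series from 1:1,000 to 1:512,000:
dils = [1000 * 2**i for i in range(10)]
ods = [2.8, 2.7, 2.6, 2.4, 2.1, 1.7, 1.2, 0.8, 0.5, 0.3]
print(endpoint_titer(dils, ods, cutoff=0.25))  # 512000
```

With a stricter cutoff the reported titer drops accordingly, which is why the cutoff definition (e.g., mean naïve-serum OD plus a multiple of its standard deviation) must be fixed before comparing titers across mice.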
Mouse-antisera generated by immunization with the Pfs230 D1M domain significantly reduces oocyst loads
After assessing the immunogenic response elicited by the Pfs230 D1M domain antigen, we evaluated the anti- Plasmodium activity of all mouse antisera through a standard membrane-feeding assay (SMFA). Immunized mice were boosted 3 times, and 50 μL of antiserum from each mouse was kept after each boost, pooled, and used to isolate the IgG fraction. The IgG fraction from each mouse was then added to a Plasmodium falciparum gametocyte culture blood mix (with RBCs and human serum) at a final concentration of 250 μg/ml and fed to Anopheles female mosquitoes through a membrane feeder ( Fig. 1B ). The IgG fraction isolated from pre-immunized mice was used as a negative control ( Fig. 2B , Ctl-IgG), together with the group of mosquitoes fed on the gametocyte blood mix supplied with PBS as the mock control ( Fig. 2B , Pf-only). Since the transmission-blocking activity of previously characterized Pfs230-specific antibodies was complement-dependent ( 27 ), the human serum in the blood meal was not heat-inactivated. The infectious blood meal was delivered with high gametocytaemia to achieve a strong infection prevalence and intensity that would facilitate the selection of the most effective anti- Plasmodium IgGs.
The in vitro reactivities of all immunized mice (S1-S5) were comparable ( Fig. 2A ); however, IgGs isolated from mice S2 and S5 showed a significantly higher level of transmission-reducing activity ( Fig. 2B ). Similar to previous studies describing the anti- Plasmodium activity of Pfs230 ( 28 ), we found a prominent reduction in oocyst numbers in infected mosquito midguts (8-fold reduction of median oocyst load with S5 IgG, Mann-Whitney test, p < 0.0001; and a significant reduction of infection prevalence, Fisher’s exact test, p < 0.01) ( Fig. 2B , Fig. 2C ). Taken together, these results led us to select mouse S5 for hybridoma production of monoclonal antibodies.
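The prevalence comparisons above use Fisher's exact test on infected-versus-uninfected mosquito counts. A minimal pure-Python version of the two-sided test (via the hypergeometric distribution) is sketched below; the mosquito counts in the sanity check are made up, not the study's data.

```python
# Minimal two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]],
# e.g. infected/uninfected mosquitoes in control vs antibody-fed groups.
# Counts below are illustrative only.
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2

    def p_table(x):  # hypergeometric probability of a table with cell (0,0) = x
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = p_table(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Two-sided p-value: sum over all tables no more likely than the observed.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Sanity check on a tiny table: [[2, 0], [0, 2]] -> p = 1/3
print(round(fisher_exact_two_sided(2, 0, 0, 2), 4))  # 0.3333
```

In practice one would use a vetted implementation (e.g. `scipy.stats.fisher_exact`); the hand-rolled version just makes the test's logic explicit.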
Hybridoma supernatant reacts to native Pfs230
We employed well-established hybridoma technology to produce monoclonal antibodies. Briefly, B lymphocytes isolated from immunized mice were fused with immortal myeloma cell lines to form hybridomas ( 29 ). B lymphocytes isolated from mouse S5 were used to generate 20 hybridoma cell lines (1E3, 1F11, 3D1, 3D6, 3F10, 3G11, 4B6, 4G8, 7A7, 9F3, 11A2, 12D9, 12E1, 12H6, 13G9, 14D2, 14F11, 15A3, 15F8, 15E9), each expressing a monoclonal antibody targeting the Pfs230 D1M domain. To validate that the hybridoma-produced monoclonal antibodies bound the Pfs230 D1M domain, we performed in vitro indirect ELISAs ( Fig. 1B ). Secondary staining with an Fcγ fragment-specific peroxidase-AffiniPure goat anti-mouse IgG revealed high immunoreactivity for all 20 undiluted hybridoma supernatants, with 450 nm OD readings ranging from 2.239 to 2.874 ( Fig. 3A ), confirming that all 20 hybridomas produce monoclonal antibodies that can bind the Pfs230 D1M domain antigen.
To assess whether the antibodies could recognize the native Pfs230 protein on the surface of P. falciparum NF54 gametocytes, we performed immunofluorescence assays (IFAs) with hybridoma supernatants ( Fig. 1B ). Staining with a secondary Fcγ fragment-specific peroxidase-AffiniPure goat anti-mouse IgG showed that only the supernatants collected from clones 13G9, 3F10, and 14D2 bound strongly to gametocytes ( Fig. 3B ); these clones were thus selected for further studies to assess their parasite-blocking potential. Clones 1E3, 1F11, 3D1, 3D6, 3G11, 4B6, 4G8, 7A7, 9F3, 11A2, 12D9, 12E1, 12H6, 14F11, 15A3, 15F8, and 15E9 displayed reactivity comparable to the negative control, with no detectable antibody binding to the gametocytes (data not shown).
The 13G9 monoclonal antibody has potent Plasmodium -blocking activity
Next, to assess the efficacy of candidate monoclonal antibodies in suppressing P. falciparum infection, we isolated IgG fractions from hybridoma supernatants 13G9, 3F10, and 14D2 as test groups and from complete hybridoma cell media as the negative control (Ctl-IgG), and evaluated their anti- Plasmodium activities by SMFA ( Fig. 1B ). Their reactivity to gametocytes was again confirmed by the same assays described above ( Fig. 1B ): both in vitro indirect ELISA and immunofluorescence confirmed the specific reactivity of these three monoclonal antibodies ( Fig. 4 ).
Co-feeding female Anopheles mosquitoes 30 μg of the 13G9 monoclonal antibody with a P. falciparum gametocyte-infected blood meal (final concentration of 166 μg/mL) through a membrane feeder resulted in a prominent reduction of infection intensity ( Fig. 5A ) (2.5-fold reduction of mean oocyst load, Mann-Whitney test, p = 0.0035) and infection prevalence (50% reduction, Fisher’s exact test, p = 0.0011) ( Fig. 5B ) at 8 days post-infection. Monoclonal antibodies 3F10 and 14D2 did not show any Plasmodium -blocking activity despite binding both the Pfs230 D1M domain and the full-length protein in vitro ( Fig. 4 ), highlighting the necessity for in vivo functional assays in addition to in vitro reactivity assays.

Discussion
Targeting the malaria parasite in the mosquito vector with transmission-blocking vaccines is a disease control strategy that has gained increasing interest over the past decades, owing to the difficulty of eliminating malaria in the absence of an effective vaccine ( 30 ). A transmission-blocking vaccine (TBV) works by immunizing the human host against a parasite or mosquito antigen that is essential for Plasmodium sporogonic development in the mosquito. While such vaccines do not protect the vaccinated individual from disease, they could contribute to disease suppression at the population level because infected individuals can no longer transmit the pathogen. Three P. falciparum -encoded proteins, Pf s48/45, Pf s230, and Pf s25, are currently considered lead candidates for TBV development. Pf s230 is a member of the six-cysteine (6-Cys) family and is composed of fourteen 6-Cys domains forming a complex intra-domain disulfide bond structure, the site of recognition for antibodies binding conformational epitopes. Accordingly, polyclonal antibodies raised through immunization with whole gametocytes did not recognize the antigen in its reduced form ( 27 ). Previous studies aiming to identify the most suitable region of Pfs230 to use as an antigen showed that the N-terminal prodomain, which lacks the cysteine-rich domains, could also elicit transmission-blocking antibodies, suggesting that such antibodies may also be directed against non-conformational epitopes ( 31 , 32 ).
Extending prior studies that focused on identifying the best antigen for vaccine development ( 33 , 34 ), here we directed our attention to generating and isolating an effective monoclonal antibody targeting the early stages of parasite development within the mosquito, using the Pfs230 D1M domain previously shown to elicit a strong immune response ( 33 , 34 ). We introduced an additional level of testing into the standard monoclonal antibody production pipeline by selecting the monoclonal antibody that displayed the highest functional parasite-blocking activity.
In a cohort of 5 mice immunized with the Pfs230 D1M domain under identical conditions, the IgG fraction isolated from mouse S5 led to significantly lower oocyst counts and infection prevalence when fed to Anopheles mosquitoes in an infectious meal, confirming its selection as the best candidate in terms of both potency and effectiveness of the elicited immune response.
As an alternative to transmission-blocking vaccines, new therapeutic approaches developed for malaria prevention and therapy include the use of recombinant transmission-blocking antibodies. The ability to target antigens expressed in the early sexual stages makes it possible to reduce the infection intensity within the mosquito host. Cocktails of different antibodies or bispecific molecules could serve this specific purpose and render this strategy more effective and long-lasting ( 35 ). In recent years, panels of monoclonal antibodies targeting Pfs230 with a range of affinities have been developed by various research groups, underscoring the interest in generating new reagents against this antigen ( 36 ).
Finally, the same monoclonal antibodies developed as transmission-blocking molecules could be used in the context of malaria eradication strategies based on population modification of the vector host. Since Pfs230’s essential biological function in malaria transmission takes place in the mosquito midgut lumen after ingestion of gametocytes in an infected blood meal, the 13G9 monoclonal antibody could potentially be developed into a single-chain antibody expressed under an appropriate promoter and secreted into the midgut lumen.
Transgenic mosquitoes expressing transmission-blocking molecules and able to transmit the desired traits with super-Mendelian inheritance are already a reality and have been shown to reduce the parasite burden below the transmission threshold in cage-trial experimental settings ( 37 ).
A combination of multiple effectors targeting Plasmodium at different developmental stages is likely the most effective strategy to counter the emergence of parasite resistance arising from selective pressure during host-pathogen coevolution ( 38 ). In this light, the generation and validation of a new transmission-blocking agent targeting the early stages of the parasite, in addition to already established ones, advances the goal of malaria eradication.

Author contributions
Conceived and designed the experiments: E.C.C., G.D., and E.B.; performed the experiments: E.C.C., Y.D., and M.L.S.; analyzed the data: all authors; First draft: E.C.C. All authors have read and agreed to the published version of the manuscript.
Vector control is a crucial strategy for malaria elimination by preventing infection and reducing disease transmission. Most gains have been achieved through insecticide-treated nets (ITNs) and indoor residual spraying (IRS), but the emergence of insecticide resistance among Anopheles mosquitoes calls for new tools. Here, we present the development of a highly effective murine monoclonal antibody targeting the N-terminal region of the Plasmodium falciparum gametocyte antigen Pfs230 that can decrease infection prevalence by > 50% when fed to Anopheles mosquitoes with gametocytes in an artificial membrane feeding system. We used a standard mouse immunization protocol followed by protein interaction and parasite-blocking validation at three distinct stages of the monoclonal antibody development pipeline: post-immunization, post-hybridoma generation, and final validation of the monoclonal antibody. We evaluated twenty antibodies, identifying one (mAb 13G9) with high Pfs230 affinity and parasite-blocking activity. This 13G9 monoclonal antibody could potentially be developed into a transmission-blocking single-chain antibody for expression in transgenic mosquitoes.

Acknowledgments
We thank all members of the Bier laboratory and Dimopoulos laboratory for constructive ideas and discussions. We thank the Johns Hopkins Malaria Research Institute Insectary and Parasitology core facilities and Bloomberg Philanthropies for their support. We also thank Dr. Sabyasachi Pradhan for providing P. falciparum gametocyte cultures.
Funding
The studies in the Bier lab were supported by The Tata Institutes for Genetics and Society - UCSD and by NIH grants R01GM117321 and R01AI162911.
Introduction
The International Space Station (ISS) is a modular spacecraft replete with stressors that challenge the bounds of human physiology. Astronauts aboard the ISS live in a tight-quarter, enclosed, near-weightless environment in low Earth orbit. Astronauts face superterrestrial levels of ionizing radiation, disruption of circadian rhythms, and encephalic fluid redistribution. 1 , 2 Because of microgravity-induced physiological changes, astronauts commonly exhibit muscle atrophy, ophthalmic disorders, serum chemistry alterations, and bone demineralization. 3 – 6 Many of these physiological changes mirror disease states on Earth, including age-related changes in telomere maintenance and hormonal perturbations. 7 , 8 As such, the microgravity environment has been proposed as a unique stressor that can help understand underlying cellular and molecular drivers of pathological changes observed in astronauts with the ultimate goals of developing strategies to enable long-term spaceflight and better treatment of diseases on Earth. 9 We used the unique environment of the ISS to evaluate the effects of microgravity on the kidney response to serum exposure and biotransformation of vitamin D.
The kidneys play an essential homeostatic function by filtering out waste products of cell metabolism. While small waste molecules are freely filtered, larger serum proteins such as albumin and immunoglobulins are efficiently retained within the circulation. However, the selectivity of the kidney filtration barrier is disrupted in several common diseases, resulting in the spillage of serum proteins into the urine (proteinuria). It is estimated that 10% of adults in the United States have elevated levels of serum-derived albumin detectable in their urine. 10 Data regarding the renal handling of filtered protein (such as albumin) in astronauts in flight is conflicting; while it was initially reported that urinary excretion of albumin was increased in astronauts in flight, follow up studies showed decreased urinary excretion of albumin. 11 – 13
Whether proteinuria can directly activate injury in kidney tubules or exacerbate disease progression is controversial. 14 Ground-based studies have shown that serum, but not its major protein component albumin, induced tubular injury and secretion of pro-inflammatory cytokines and matrix modifying enzymes, demonstrating a causal role for serum proteins in tubular injury. 15 To test whether the additional stressor of microgravity alters the pathologic response of the proximal tubule to serum exposure, we treated human proximal tubule epithelial cells (PTECs) cultured in a microphysiological device with human serum and measured biomarkers of toxicity and inflammation (KIM-1 and IL-6) and conducted global transcriptomics via RNAseq on cells undergoing flight (microgravity) and respective controls (ground).
The kidney may play an important role in bone loss in microgravity through altered metabolism of 25-hydroxy vitamin D 3 (25(OH)D 3 ) to its most biologically active form, 1α,25-dihydroxy vitamin D 3 (1α,25(OH) 2 D 3 ), or to inactive degradation products such as 24R,25-dihydroxy vitamin D 3 (24R,25(OH) 2 D 3 ). 1α,25(OH) 2 D 3 is important for bone homeostasis, primarily through regulation of calcium uptake in the intestine and modulation of osteoclast number and activity. 16 Despite dietary supplementation of vitamin D 3 and plasma levels of 25(OH)D 3 remaining constant, plasma levels of 1α,25(OH) 2 D 3 in astronauts in flight decrease over time. 17 , 18 At the same time, absorption of calcium in the intestine is impaired. 17 The kidney is the primary site for bioactivation of 25(OH)D 3 to 1α,25(OH) 2 D 3 via cytochrome P450 27B1 (CYP27B1). The kidney can also metabolize 25(OH)D 3 and 1α,25(OH) 2 D 3 to inactive products via cytochrome P450 24A1 (CYP24A1) and CYP3A5. 16 In addition, the kidney maintains the levels of 1α,25(OH) 2 D 3 through an autocrine mechanism, whereby 1α,25(OH) 2 D 3 activates the vitamin D receptor (VDR), leading to induction of CYP24A1 . Thus, microgravity could decrease plasma levels of 1α,25(OH) 2 D 3 by 1) decreasing renal CYP27B1 activity, 2) increasing renal CYP24A1 activity, or 3) increasing renal CYP3A5 activity. To test whether microgravity affects the transcript expression or activity of CYP27B1, CYP24A1, or CYP3A5, we treated proximal tubule epithelial cells (PTECs) cultured in a microphysiological device with 25(OH)D 3 , monitored metabolite formation, and conducted global transcriptomics via RNAseq on cells undergoing flight (microgravity) and their controls (ground).
Cell culture
Deidentified human cortical kidney samples were collected through the Northwest Biotrust at the University of Washington Medical Center with local IRB approval (UW IRB Study 1297). Primary human proximal tubule epithelial cells were isolated by mechanical and enzymatic dissociation and cultured as previously described. 38 , 39 Serum-free tubular cell cultures were maintained in PTEC maintenance media consisting of DMEM/F12 (Gibco, 11330–032) supplemented with 1x insulin-transferrin-selenium-sodium pyruvate (ITS-A, Gibco, 51300044), 50 nM hydrocortisone (Sigma, H6909), and 1x Antibiotic-Antimycotic (Gibco, 15240062). Upon reaching 70–80% confluence, PTECs were passaged by enzymatic digestion with 0.05% trypsin EDTA (Gibco, 25200056) and manual cell scraping to obtain a single-cell suspension that was subsequently neutralized with defined trypsin inhibitor (Gibco, R007100). Cells were pelleted by centrifugation at 200 × g for 6 minutes, resuspended in maintenance media, and plated in cell culture-treated flasks at > 30% confluency. PTECs were used at passage number 2–3 from all donors in these experiments.
Microphysiological devices
Triplex microfluidic devices were purchased from Nortis, Inc (Bothell, WA) and prepared as previously described. 40 Triplex microfluidic devices contain three fluidic circuits, which enables generation of three PTEC tubules on a single device that can be continuously perfused with media.
Maintenance, treatment, and fixation of devices in BioServe perfusion platform
The BioServe perfusion platform was developed to house three Triplex devices in a self-contained, hermetically sealed system to meet the levels of containment required by NASA and reduce the space required to perfuse the Triplex devices. A flow rate of 0.5 μL/min was used for cell maintenance and treatment. The treatment conditions were control (PTEC maintenance media), vitamin D, or 2% human serum. 19 To prepare 2% human serum treatment media, normal human serum (Valley Biomedical, HS1021) was diluted in PTEC maintenance media to a final concentration of 2%. Vitamin D treatment media consisted of PTEC maintenance media supplemented with 1.5 μM 25(OH)D 3 (Toronto Research Chemicals, C125700) and 3 μM DBP (Athens Research, 16–16-070307). To prepare vitamin D treatment media, stock 25(OH)D 3 was solubilized with molecular biology grade ethanol to 5 mM. DBP was reconstituted to 3 μM in PTEC maintenance media to create PTEC-DBP media. 25(OH)D 3 was then diluted into PTEC-DBP media to 1.5 μM. Vitamin D media was allowed to equilibrate at room temperature for 30 minutes prior to filling the treatment cassette to ensure binding of 25(OH)D 3 to DBP. The final concentration of ethanol in the vitamin D treatment media was 0.02%. To preserve the tubules at the end of treatment for analyses, the devices were fixed for 2 hours with either 10% neutral buffered formalin (Thermo, 5725) or RNALater (ThermoFisher, AM7024) at a flow rate of 10 μL/min.
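The media preparations above are simple C1·V1 = C2·V2 dilutions. The helper below is a generic sketch; the 10 mL batch volumes in the examples are assumptions for illustration, since the protocol does not state batch sizes.

```python
# Generic dilution helper (C1*V1 = C2*V2) for preparing treatment media.
# Batch volumes below are assumed examples; the protocol does not specify them.

def stock_volume_needed(c_stock, c_final, v_final):
    """Volume of stock to add to reach c_final in v_final.
    c_stock and c_final must share units, as must the returned volume and v_final."""
    if c_final > c_stock:
        raise ValueError("final concentration cannot exceed stock concentration")
    return c_final / c_stock * v_final

# 2% serum media from neat (100%) serum, 10 mL batch -> 0.2 mL of serum
print(stock_volume_needed(100, 2, 10.0))       # 0.2 (mL)
# 1.5 uM 25(OH)D3 from a 5 mM (5000 uM) stock, 10 mL (10,000 uL) batch -> 3 uL
print(stock_volume_needed(5000, 1.5, 10_000))  # 3.0 (uL)
```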
RNA isolation, sequencing, and analysis
RNA was isolated from devices fixed with RNALater by injecting 100 μL of RLT lysis buffer (Qiagen, 79216) into the injection port using a 1 mL syringe outfitted with a 22-gauge needle. The cell lysate was collected at the outlet, 400 μL of RLT lysis buffer was added to each tube, and the samples were stored at −80°C until extraction. RNA was extracted using the RNeasy Micro Kit (Qiagen, 74004) and converted to cDNA with the SMART-Seq v4 Ultra Low Input RNA Kit (Takara, 634891). Sequencing libraries were constructed using the SMARTer ThruPlex DNA-Seq Kit (Takara, R400676) and sequenced on a NovaSeq 6000 instrument (Illumina, San Diego, CA). Sequencing reads were aligned to GRCh38.p12 with reference transcriptome GENCODE human release 30 (with additional ERCC spike-in sequences) using STAR (v2.6.1d).
Statistical methods and model fitting
Prior to fitting models, we excluded genes that are expressed at consistently low levels across all samples. 41 Before filtering we had data for 58,870 genes; after filtering, 14,094 genes remained. The trimmed mean of M-values (TMM) normalization method was applied. 42 We used the voom method from the Bioconductor limma package, which estimates the mean-variance relationship of the log-counts per million (logCPM), generates a precision weight for each observation, and enters these into the limma analysis pipeline. 43 A small positive value was added to each raw count to avoid taking the logarithm of zero; logCPM can be interpreted as count data normalized by the corresponding total sample counts (in millions). We used the linear mixed model approach, fitting condition_treatment as the fixed effect and donor as the random effect by estimating the within-donor correlation. 44 We then fit a linear model with condition_treatment and incorporated the within-donor correlation (corr = 0.3). Since not all donors received all the treatments under each condition, the mixed model approach provides more statistical power for the unbalanced design. Both observation-level and sample-specific weights were used, which enabled up- or down-weighting of individual samples. This allowed us to keep all samples in the analysis and minimized the need to make decisions about removing possible outlier samples from consideration. The approach of using observation-level and sample-specific weights has been shown to increase power in both real and simulated studies. 45
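The logCPM transform at the heart of the voom pipeline can be sketched in a few lines. The toy counts are made up, and this simplified version omits the TMM normalization factors that the real pipeline applies to library sizes.

```python
# Simplified logCPM, as used upstream of limma-voom: a small positive offset
# avoids log(0), and counts are scaled by library size in millions.
# Toy counts only; TMM normalization factors are omitted for brevity.
from math import log2

def log_cpm(counts, prior=0.5):
    """logCPM for one sample's gene counts, with a prior count to avoid log(0)."""
    lib_size = sum(counts)
    return [log2((c + prior) / (lib_size + 1) * 1e6) for c in counts]

counts = [0, 10, 100, 1000]  # one sample, four genes
print([round(x, 2) for x in log_cpm(counts)])
```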
We selected genes based on a 1.1-fold or greater difference in expression, and a false discovery rate (FDR) of 5%. Rather than using a post-hoc fold-change filtering criterion, we used the TREAT function from limma, which incorporates the fold-change into the statistic, meaning that instead of testing for genes which have fold-changes different from zero (H0:β = 0 versus HA:β ≠ 0), we tested whether the fold-change was greater than 1.1-fold in absolute value (H0:|β|<=1.1 versus HA:|β|>1.1). 46
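The gene-selection rule combines a 5% false discovery rate with a 1.1-fold threshold built into the TREAT statistic. TREAT itself is not reproduced here, but the Benjamini-Hochberg FDR adjustment underlying the 5% cutoff can be sketched in pure Python on made-up p-values.

```python
# Benjamini-Hochberg FDR adjustment (pure Python), as used to call DE genes
# at FDR < 0.05. P-values below are made up. Note that TREAT incorporates the
# 1.1-fold threshold into the test statistic itself, not as a post-hoc filter.

def bh_adjust(pvals):
    """Return BH-adjusted p-values, preserving the input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end  # 1-based rank of this p-value
        running_min = min(running_min, pvals[i] * n / rank)
        adj[i] = running_min
    return adj

print(bh_adjust([0.01, 0.20, 0.03, 0.04]))
```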
Gene ontology and iPathwayGuide
Advaita iPathwayGuide scores pathways using the Impact Analysis method, which considers two types of evidence: 1) over-representation of DE genes in a given pathway relative to random chance (pORA) and 2) the perturbation of the pathway, computed by taking into account factors such as the magnitude of each gene’s expression change, position within the pathway, and gene interactions (pAcc). In gene ontology (GO) analysis, the number of DE genes annotated to a term was compared with the number of DE genes expected by chance. Pathways and GO terms were considered significant at a false-discovery rate < 0.05.
Quantification of IL-6 and KIM-1 by ELISA
The DuoSet ® line of ELISAs from R&D Systems (Minneapolis, MN) was used to quantify the protein levels of IL-6 and KIM-1 (HAVCR1) in device effluents according to the manufacturer’s instructions. The levels of IL-6 and KIM-1 in 2% human serum were below the limit of detection. Samples were assayed in technical duplicates.
Statistical analysis of IL-6 and KIM-1 effluent biomarkers
To investigate whether there exists an interaction between condition and treatment groups, specifically to determine if changes in KIM-1 or IL-6 levels among treatment groups (media control, 2% human serum, and vitamin D) vary in flight versus ground conditions, we employed a linear mixed effect model. This model incorporates treatment group and condition as fixed effects and includes their interactions, with the donor serving as the random effect in a random intercept model. Prior to fitting the model, concentrations of KIM-1 and IL-6 were log 2 -transformed. The analysis was conducted using the lme function from the nlme package in R.
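The interaction term in that model asks whether the serum effect on log2-transformed biomarker levels differs between flight and ground, i.e., a difference in differences. A toy calculation of that contrast is sketched below with made-up concentrations; the study's actual fit used `lme` from the `nlme` package in R, with donor as a random effect.

```python
# Difference-in-differences contrast for the condition x treatment interaction
# on log2-transformed biomarker concentrations. All numbers are illustrative;
# the study fit a mixed model (nlme::lme in R) with donor as a random effect.
from math import log2

def mean_log2(xs):
    """Mean of log2-transformed values."""
    return sum(log2(x) for x in xs) / len(xs)

# Hypothetical KIM-1 effluent concentrations per group (units arbitrary)
ground_ctrl = [100, 120, 90]
ground_serum = [400, 350, 500]
flight_ctrl = [110, 95, 105]
flight_serum = [380, 420, 360]

serum_effect_ground = mean_log2(ground_serum) - mean_log2(ground_ctrl)
serum_effect_flight = mean_log2(flight_serum) - mean_log2(flight_ctrl)
interaction = serum_effect_flight - serum_effect_ground  # ~0 -> similar response
print(round(serum_effect_ground, 2), round(serum_effect_flight, 2),
      round(interaction, 2))
```

A near-zero interaction, as in this toy example, corresponds to the serum response being similar in flight and ground conditions; the mixed model additionally accounts for repeated measures within donors.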
Vitamin D analysis
Stock solutions and standard curves were prepared as described previously. 47 PTEC maintenance media containing 3 μM DBP (PTEC-DBP media) was used as blank matrix. Quality control samples were prepared by diluting 1α,25(OH) 2 D 3 , 4β,25(OH) 2 D 3 , 24R,25(OH) 2 D 3 , and 25(OH)D 3 into PTEC-DBP media to final concentrations of 0.02, 0.02, 0.2, and 200 ng/mL, respectively. Effluent from the 48-hour 25(OH)D 3 treatment was collected and stored at ≤−80°C aboard the ISS U.S. National Laboratory. All treated samples remained frozen throughout the return trip to Earth and shipment to the University of Washington, where they were stored at −80°C until extraction.
Because vitamin D and its metabolites are light sensitive, all steps were performed under low light. If the treatment sample volume was less than 500 μL, it was brought up to 500 μL with PTEC-DBP media. Proteins were precipitated by adding 1 mL of 1:1 isopropanol:methanol, vortexing, then incubating at room temperature for 10 minutes, followed by centrifugation at 16,100 × g for 10 minutes. The supernatant was decanted into silanized 16×100 mm tubes (Fisher, 12100387) before liquid-liquid extraction by adding 3 mL of 60:40 hexane:methylene chloride. The tubes were capped, shaken on a horizontal shaker for 15 minutes, then centrifuged for 10 minutes at 16,100 × g in a swinging bucket rotor. The resultant upper solvent layer was transferred to clean silanized glass tubes. The liquid-liquid extraction procedure was repeated twice more, with the resultant upper solvent layers combined into a single tube. After complete evaporation of the solvent under a nitrogen stream at 40°C, the residue was derivatized with 4-(4-(Dimethylamino)phenyl)-3H-1,2,4-triazole-3,5(4H)-dione (DAPTAD). DAPTAD stock solution (4 mg DAPTAD in 4 mL ethyl acetate) was diluted 1:1 in acetonitrile, and 200 μL was added to the residue, vortexed, and incubated at room temperature for 45 minutes with vortex-mixing every 15 minutes. At the end of the incubation, the samples were dried down under a nitrogen stream at 40°C, resuspended in 52 μL methanol, and vortexed. 23 μL of deionized water was added to the samples before vortexing and centrifuging for 15 minutes at 16,100 × g to remove excess DAPTAD and solid precipitate. The supernatant was transferred to amber liquid chromatography vials containing silanized glass inserts. The vials were stored at −80°C until LC/MS/MS analysis the following day.
Vitamin D chromatography and mass spectrometry:
Chromatographic separation was performed as previously described. 47 Briefly, the method used an RP-Amide (2.1 × 150 mm, 2.7 μm) column (Supelco 2–0943) at room temperature on a Shimadzu Nexera UPLC with water (A, 0.1% formic acid) and methanol (B, 0.1% formic acid) as the mobile phases. Analytes were separated using the following gradient: solvent B starting at 55% for the first minute, increasing linearly to 65% from 1–6 minutes, held at 65% until 8 minutes, increasing linearly to 75% from 8–15 minutes, held at 75% until 15.5 minutes, increasing linearly to 90% from 15.5–17 minutes, held at 90% until 23 minutes, then returning to 55% from 23–23.5 minutes. The injection volume was 0.3 μL for analysis of 25(OH)D 3 and 10 μL for all other analytes. Analytes were detected using a positive ionization method on an AB Sciex 6500 QTRAP mass spectrometer (SCIEX, Framingham, MA). The parent and daughter ions were detected using multiple reaction monitoring with m/z channels set to detect 25(OH)D 3 (619.2→601.1), 25(OH)D 3 -d 6 (625.4→341.1), 24R,25(OH) 2 D 3 (635.2→341.1), 24R,25(OH) 2 D 3 -d 6 (641.2→341.1), 4β,25(OH) 2 D 3 (635.2→357.1), 1α,25(OH) 2 D 3 (635.2→357.1), and 1α,25(OH) 2 D 3 -d 6 (641.2→357.1). The retention times for the analytes were as follows: 25(OH)D 3 , 18.16 min; 25(OH)D 3 -d 6 , 18.13 min; 24R,25(OH) 2 D 3 , 13.4 min; 24R,25(OH) 2 D 3 -d 6 , 13.34 min; 4β,25(OH) 2 D 3 , 15.06 min; 1α,25(OH) 2 D 3 , 15.3 min; and 1α,25(OH) 2 D 3 -d 6 , 15.2 min.
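The stepwise mobile-phase gradient above is piecewise-linear in time. The sketch below encodes the published breakpoints and interpolates %B at any time point, which is a convenience for plotting or method transfer rather than part of the published method.

```python
# Piecewise-linear encoding of the published %B (methanol) gradient.
# Breakpoints (time_min, percent_B) are taken from the method text; the
# interpolation helper itself is just an illustrative convenience.

GRADIENT = [(0, 55), (1, 55), (6, 65), (8, 65), (15, 75), (15.5, 75),
            (17, 90), (23, 90), (23.5, 55)]

def percent_b(t):
    """Linearly interpolate %B at time t (minutes) within the gradient program."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")

print(percent_b(3.5))  # 60.0 (halfway through the 1-6 min ramp)
print(percent_b(20))   # 90.0 (hold segment)
```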
Experimental design
After delivery and installation on board the ISS, cells were acclimated to microgravity for 6 days ( Fig. 8 ). Maintenance media cassettes were switched to treatment media cassettes containing either maintenance media, 2% human serum, or 3 μM vitamin D binding protein (DBP) + 1.5 μM vitamin D (25(OH)D 3 ), and cells were treated for 48 hours prior to fixation with either RNAlater for RNAseq or formalin. Ground controls were conducted using minute-to-minute time matching with on-station timing, offset by a 36-hour delay.
PT-MPS platform and perfusion system
Nortis microfluidic chips are molded from polydimethylsiloxane, a semi-transparent, flexible, generally bio-compatible, and gas-permeable silicone polymer ( Fig. 1 ). While the footprint of the Nortis Triplex chip is relatively small, the equipment required for perfusion including chip platform, shelves, docking station, and pneumatic pump are relatively large. In order to reduce the footprint of the Triplex chip during perfusion and meet the levels of containment required by the National Aeronautics and Space Administration (NASA), we partnered with BioServe Space Technologies to design, machine, and fabricate a novel perfusion platform as previously described. 19
Experimental design and loss of devices to mold contamination
During disassembly of the chips from the housing unit, mold was observed on the exterior of several chips near the matrix port, cell seeding port, and edges. Mold was also observed within the flow path of some devices. Media overflow from the injection port was noted from 9.7% (14/144) of the channels before integration into the BioServe perfusion platform and may have contributed to the contamination. Consequently, channels that had visible mold, issues with RLT perfusion during RNA isolation, or notably discolored effluents were excluded from the analyses. In total, 65.3% (47/72) and 66.7% (48/72) of the ground and flight samples were included for effluent analyses, respectively. 72.2% (39/54) and 61.1% (33/54) of the ground and flight samples were analyzed by RNAseq, respectively. 94.4% (17/18) and 55.6% (10/18) of the ground and flight samples were used for the analysis of vitamin D metabolites, respectively. The number of usable samples for each donor separated by treatment and condition (ground vs flight) is summarized in Table 1 .
Transcriptional response of PTECs to 2% human serum in ground and flight conditions
To characterize the changes induced by serum exposure and identify condition-dependent responses, RNA from multiple replicates of control- or serum-treated PT-MPS was isolated and transcriptomic profiles were measured by RNA-seq. Exposure of PT-MPS to 2% normal human serum resulted in differential expression of 2,389 and 2,220 genes compared to control in the ground and flight conditions, respectively, based on a fold change of at least 1.1 at an adjusted p-value threshold of 0.05. In the ground condition, 1,144 and 1,245 genes were up- and down-regulated, respectively, whereas in the flight condition 1,108 and 1,112 genes were up- and down-regulated, respectively ( Fig. 2A ). No genes were differentially expressed between 1) ground media vs. flight media, 2) ground serum vs. flight serum, or 3) (ground serum vs. ground media) vs. (flight serum vs. flight media), indicating that 1) flight alone did not impact PTEC gene expression, 2) the relative expression level for a given gene between the ground and flight serum-treated samples was similar, and 3) the flight condition did not affect the magnitude of change in expression of a gene between control treatment and serum treatments (i.e., the difference in differences).
To elucidate the functional networks regulated by serum exposure in PTECs, we performed Advaita gene ontology analyses and iPathwayGuide analyses on the genes differentially expressed between serum and control treatments in the ground and flight conditions. Gene ontology enrichment analysis showed over-representation of the set of DE genes in cellular component terms, such as mitochondrion (GO:0005739), plasma membrane (GO:0005886), extracellular space (GO:0005615), and condensed chromosome (GO:0000793) ( Fig. 2B ). While the false discovery rate (FDR) adjusted p-value calculated for each cellular component term was different between ground and flight, the number of DE genes within a given term was comparable, suggesting the overall response to serum between ground and flight chips was similar ( Fig. 2B ). Advaita pathway analysis revealed that several cellular pathways were significantly affected by serum treatment in both the ground and flight conditions, including cell cycle (ground: p = 3.19×10 −6 and flight: p = 7.7×10 −6 ), cytokine-cytokine receptor interaction (ground: p = 6.93×10 −6 and flight: p = 1.12×10 −8 ), chemokine signaling (ground: p = 0.0337 and flight: p = 0.0039), peroxisome proliferator-activated receptor (PPAR) signaling (ground: p = 1.49×10 −4 and flight: p = 2.54×10 −5 ), and metabolic pathways (ground: p = 3.96×10 −17 and flight: 8.16×10 −15 ) ( Figs. 2C and 2D ). Most of the genes within the cell cycle, cytokine-cytokine receptor interaction, and chemokine signaling pathways were upregulated. Examination of the cell cycle pathway showed upregulation of genes that promote progression through the G1, S, G2, and M stages of the cell cycle ( Supplemental Figs. 1 and 2 ). Several members of the CC chemokine, CXC chemokine, and interleukin families were upregulated in the chemokine signaling and cytokine-cytokine receptor interaction pathways ( Supplemental Table 1 ). On the other hand, the PPAR signaling and metabolic pathways were downregulated.
More specifically, genes within fatty acid metabolism (ground: p = 7.39×10 −6 and flight: p = 3.79×10 −7 ), tricarboxylic acid cycle (ground: p = 4.05×10 −3 and flight: p = 4.09×10 −7 ), and steroid biosynthesis (ground: p = 3.3×10 −5 and flight: p = 1.9×10 −5 ) pathways were downregulated ( Supplemental Table 2 ). Non-alcoholic fatty liver disease, Alzheimer disease, and Huntington disease pathways were affected only in the ground condition. Inspection of the DE genes within those pathways indicated that the statistical significance was largely driven by a group of mitochondrial genes associated with oxidative phosphorylation ( Fig. 1C and 1D ). Consistent with this observation, the oxidative phosphorylation pathway was far more impacted by 2% human serum treatment in the ground condition (p = 1.87×10 −24 , 63 DE genes) than in the flight condition (p = 2.5×10 −6 , 33 DE genes). Overall, these data suggest that serum exposure caused PTECs to activate a proliferative program, shift cellular bioenergetics, and promote a pro-inflammatory extracellular environment.
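As a point of reference for how over-representation statistics like those above are computed, enrichment tools typically apply a one-sided hypergeometric test per term and a Benjamini-Hochberg adjustment to control the false discovery rate; the exact Advaita/iPathwayGuide algorithms differ in detail, and the gene counts below are invented toy numbers, not values from this study.

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """P(observing >= k DE genes in a term of size K, given n DE genes
    drawn from N genes total) -- a one-sided over-representation test."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

def bh_fdr(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values, in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running_min = [0.0] * m, 1.0
    for rank in range(m, 0, -1):  # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Toy term: 20 genes total, 5 annotated to the term, 8 DE, 4 of those in the term.
p = hypergeom_enrichment_p(N=20, K=5, n=8, k=4)
print(round(p, 4))  # 0.0578
print(bh_fdr([0.01, 0.5, 0.02, 0.04]))
```

The same machinery, run over every GO term or pathway, yields the FDR-adjusted p-values quoted in the text.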
Next, we focused on gene-level changes to help delineate the biological consequence of exposure of PT-MPS to serum. First, we looked at metabolic reprogramming as it included the largest set of genes and was the most significantly impacted by serum treatment. Adenosine triphosphate (ATP) is a molecule that plays an important role in signal transduction (as a substrate for kinases) and provides energy to drive a variety of cellular processes including transport of ions and solutes via ATP-dependent transporters and pumps such as the ATP-binding cassette transporters and the sodium-potassium-ATPase. PTECs generate the bulk of ATP through mitochondrial oxidative phosphorylation, wherein the transfer of electrons from nicotinamide adenine dinucleotide hydride (NADH) and dihydroflavin adenine dinucleotide (FADH 2 ) to molecular oxygen (O 2 ) through a series of protein complexes (complexes I-IV) in the mitochondrial inner membrane results in pumping of protons across the inner mitochondrial membrane. This creates a transmembrane pH gradient that is subsequently utilized by complex V (or ATP synthase) to create ATP from adenosine diphosphate (ADP) and phosphate (P i ). 20 The DE genes within the oxidative phosphorylation pathway were found to be involved in the mitochondrial electron transport chain, with representation of all five of the major respiratory chain protein complexes ( Fig. 3 ). A greater number of genes were identified in the ground condition compared to the flight condition (58 vs 27, respectively), though all DE genes in both conditions were downregulated by similar magnitudes, suggesting that both ground and flight conditions had reduced mitochondrial respiration following serum treatment. The mitochondrion has its own genome which encodes thirteen proteins that participate in the electron transport chain. 20 , 21 All thirteen of those mitochondrially encoded genes were downregulated in the ground condition, but not the flight condition ( Fig. 4B ).
The expression of four key factors that control mitochondrial gene transcription, including RNA polymerase mitochondrial ( POLRMT ), transcription factor A mitochondrial ( TFAM ), transcription factor B2 mitochondrial ( TFB2M ), and transcription elongation factor mitochondrial ( TEFM ), was unchanged with serum treatment in both the flight and ground conditions (adj. p-value = 1).
To fuel the mitochondrial electron transport chain and oxidative phosphorylation, a steady source of the reducing equivalents NADH and FADH 2 is required. 20 β-oxidation of fatty acids and intermediary metabolism in the tricarboxylic acid (TCA) cycle, each of which occurs in mitochondria, are the primary processes that generate FADH 2 and NADH. 20 β-oxidation is the stepwise enzymatic process that shortens fatty acid chains by two carbon atoms per cycle, producing acetyl coenzyme A (acetyl-CoA), NADH, and FADH 2 . 22 Acetyl-CoA can subsequently be utilized in the TCA cycle, a series of chemical reactions that oxidize acetate (derived from acetyl-CoA) to ultimately produce GTP, NADH, FADH 2 , and carbon dioxide. 20
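Because each β-oxidation cycle cleaves two carbons, the stoichiometry for a saturated even-chain fatty acid can be tallied directly: an n-carbon chain requires n/2 − 1 cycles and yields n/2 acetyl-CoA, with one NADH and one FADH 2 per cycle. A small bookkeeping sketch (standard biochemistry, not data from this study):

```python
def beta_oxidation_yield(carbons):
    """Products of complete beta-oxidation of a saturated even-chain fatty acid."""
    if carbons < 4 or carbons % 2:
        raise ValueError("expects an even chain of at least 4 carbons")
    cycles = carbons // 2 - 1  # each cycle removes one 2-carbon unit
    return {"cycles": cycles, "acetyl_coa": carbons // 2,
            "nadh": cycles, "fadh2": cycles}

# Palmitate (C16): 7 cycles yielding 8 acetyl-CoA, 7 NADH, and 7 FADH2.
print(beta_oxidation_yield(16))
```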
Serum treatment of the PT-MPS significantly reduced the expression of a set of genes that function in β-oxidation, catabolism, and synthesis of fatty acids, including carnitine palmitoyltransferase 1 ( CPT1A ), a transporter that is rate-limiting in fatty acid β-oxidation, and acetyl-CoA carboxylase alpha ( ACACA ) and fatty acid synthase ( FASN ), the rate-limiting enzymes in fatty acid biosynthesis ( Fig. 3C ). In addition, serum caused a modest, but broad downregulation of genes within the TCA cycle and the solute carrier 25 (SLC25) family, which encodes mitochondrial membrane transporters for a variety of ions and metabolic intermediates ( Supplemental Table 1 ). Expression of the lipogenic enzymes ( GPAT3, GPAT4, AGPAT1–5, DGAT1 , and DGAT2 ) was unchanged (data not shown), whereas only lipin 1 ( LPIN1 ) was significantly reduced ( Supplemental Table 1 ). The expression of genes that are targets of the sterol-regulatory element binding transcription factor 2 ( SREBF2 ) was considerably repressed ( Supplemental Table 2 ). Because SREBF2 activity is inhibited in the presence of high cellular cholesterol levels, repression of SREBF2 target genes indicated that serum treatment increased cytosolic cholesterol levels. The transcriptional repression of genes involved in fatty acid metabolism, cholesterol metabolism, and intermediary metabolism (TCA cycle) strongly indicated that serum treatment caused metabolic reprogramming in PTECs.
Next, we evaluated genes which could be potential maladaptive effectors in the tubular response to protein challenge. Serum treatment induced the expression of genes that function in the extracellular space and are associated with tissue remodeling. This group of genes included extracellular matrix proteins (e.g., fibronectin 1 ( FN1 ) and transforming growth factor beta induced ( TGFBI )), growth factors (e.g., platelet derived growth factor beta ( PDGFB )), transcription factors (e.g., mothers against decapentaplegic homolog 3 ( SMAD3 )), and extracellular matrix modifying enzymes (e.g., matrix metallopeptidase 7 ( MMP7 ), lysyl oxidase like 2 ( LOXL2 ), transglutaminase 2 ( TGM2 )) ( Fig. 3 ). It also induced pro-inflammatory molecules including chemokines (e.g., chemokine c-x-c motif chemokine ligand 5 ( CXCL5 )), cytokines (e.g., interleukin 8 ( IL8 ), interleukin 23a ( IL23A ), tumor necrosis factor ( TNF )), transcription factors (e.g., interferon regulatory factor 1 ( IRF1 )), and the lipocalin neutrophil gelatinase-associated lipocalin ( NGAL or LCN2 ) ( Fig. 4D ). IRF1, LCN2 ( NGAL ), PLAUR, LOXL2 , TGFBI , and SMAD3 are among the top 20 most significantly upregulated genes in PTECs following 2% human serum treatment ( Supplemental Tables 3 and 4 ).
Because the loss of metabolic capacity and gain of pro-inflammatory and pro-fibrotic attributes could be detrimental to PTEC function, we next looked at whether the expression of proximal tubule marker genes changed. Serum treatment caused a downregulation of several genes selectively expressed by PTECs in vivo, including the water channel aquaporin 1 ( AQP1 ) and the sodium potassium-transporting ATPase subunits alpha and beta ( ATP1A1 and ATP1B1 ) ( Fig. 4E ). Concomitantly, there was downregulation of three key transcriptional regulators: peroxisome proliferator-activated receptor gamma coactivator 1-alpha ( PPARGC1A ), estrogen related receptor alpha ( ESRRA ), and SREBF2 , while forkhead box M1 ( FOXM1 ) was induced ( Fig. 4E ). ATP1B1 and PPARGC1A were among the top 20 downregulated genes in the flight condition (Supplemental Table 4). Advaita upstream regulator analysis identified both TNF (ground: p = 9.0×10 −3 and flight: p = 2.4×10 −2 ) and FOXM1 (ground: p = 2.71×10 −9 and flight: p = 1.31×10 −8 ) as upstream regulators likely to have been activated by serum treatment based on the number of consistently observed DE genes and gene interactions ( Table 2 ). Conversely, SREBF2 (ground: p = 5.86×10 −9 and flight: p = 8.31×10 −9 ) and PPARGC1A (ground: p = 1.2×10 −1 and flight: p = 1.75×10 −2 ) were predicted to have been inhibited by serum treatment ( Table 2 ). The target genes of PPARGC1A include genes involved in mitochondrial oxidative phosphorylation (e.g., CPT1A and EHHADH ) as well as genes with regulatory roles (e.g., ESRRA and SIRT3 ), while those of FOXM1 tend to be related to cell proliferation (e.g., CCNA1 and CCNB1 ) and DNA damage response (e.g., RAD51 and RAD54 ).
PT-MPS biomarker responses to 2% human serum in flight and ground conditions
To validate the observation that 2% human serum appeared to promote transcription of cell proliferation and proinflammatory genes, we quantified KIM-1 and IL-6 from device effluents. The magnitude of 2% human serum-induced secretion of KIM-1 and IL-6 varied by donor but was consistently increased relative to media control ( Fig. 4A and 4B ). Serum treatment significantly increased KIM-1 secretion relative to media control for both ground (20.9-fold, p < 0.0001) and flight conditions (14.5-fold, p < 0.0001) ( Fig. 4C ). There was no difference in serum-induced secretion of KIM-1 between ground and flight. IL-6 secretion was significantly increased by serum treatment relative to media control in both ground (3.3-fold, p = 0.0004) and flight conditions (5.2-fold, p < 0.0001) ( Fig. 4D ). The change in IL-6 from media control to serum did not differ statistically between the flight and ground conditions (p = 0.073, linear mixed effects model), suggesting that there was no interaction between microgravity and serum exposure on IL-6 secretion.
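The fold changes reported here are ratios of mean effluent biomarker concentration in serum-treated devices to media controls. A minimal sketch; the replicate concentrations below are invented to reproduce a 20.9-fold change and are not the study's measurements:

```python
def fold_change(treated, control):
    """Ratio of mean treated to mean control effluent concentration."""
    mean = lambda values: sum(values) / len(values)
    return mean(treated) / mean(control)

# Hypothetical KIM-1 effluent concentrations (pg/mL), three replicates each.
kim1_control = [100.0, 120.0, 80.0]    # mean 100
kim1_serum = [2100.0, 1900.0, 2270.0]  # mean 2090
print(round(fold_change(kim1_serum, kim1_control), 1))  # 20.9
```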
Transcriptional response of PTECs to vitamin D in ground and flight conditions
To characterize the changes induced by vitamin D exposure and identify condition-dependent responses, RNA from multiple replicates of control- or 25(OH)D 3 -treated PT-MPS was isolated and transcriptomic profiles were measured by RNA-seq. Differential expression analysis revealed 598 differentially expressed (DE) genes in the ground group and 147 in the flight group ( Fig. 5A ). In each condition, roughly half the genes were upregulated and half were downregulated. Gene ontology enrichment analysis revealed over-representation of the set of DE genes in cellular component terms such as mitochondrion (GO:0005739) and mitochondrial respiratory chain (GO:0005746) ( Fig. 5B ). The number of DE genes within each term varied by condition, with the ground condition having a greater number of DE genes in each term. Advaita iPathwayGuide analysis showed the pathways most affected by vitamin D treatment were metabolic pathways, oxidative phosphorylation, and cytokine-cytokine receptor interaction ( Fig. 5C ). Oxidative phosphorylation was more affected by vitamin D treatment on ground (p = 1.43×10 −19 , 42 DE genes) than in flight (p = 1.22×10 −8 , 19 DE genes). Consistent with this observation, more genes within the electron transport chain were downregulated in ground than in flight ( Fig. 5D ). Vitamin D treatment induced several members of the c-x-c motif ligand family in both conditions, including CXCL1, CXCL2, CXCL3 , and CXCL6 , while the cytokine interleukin 6 ( IL6 ) was only induced with 25(OH)D 3 treatment for the ground condition ( Fig. 6E ). The proliferation-associated genes FOXM1 and marker of proliferation Ki67 ( Ki67 ) were only significantly upregulated in the ground condition ( Fig. 5E ).
Impact of microgravity on PTEC metabolism of vitamin D
25(OH)D 3 undergoes multiple metabolic reactions within PTECs, including bioactivation to 1α,25(OH) 2 D 3 via CYP27B1, as well as inactivation through CYP24A1-mediated conversion to 24R,25(OH) 2 D 3 and CYP3A5-mediated conversion to 4β,25(OH) 2 D 3 ( Fig. 6A ). To evaluate the impact of microgravity on PTEC metabolism of 25(OH)D 3 , we quantified 25(OH)D 3 and its primary metabolites 1α,25-dihydroxy vitamin D3, 4β,25-dihydroxy vitamin D3, and 24R,25-dihydroxy vitamin D3 in the device effluents. Expression of CYP3A5, CYP24A1 , and CYP27B1 was detected in all samples ( Fig. 6B ). Formation of 1α,25(OH) 2 D 3 and 4β,25(OH) 2 D 3 was consistent across donors, whereas formation of 24R,25(OH) 2 D 3 varied by donor ( Fig. 6C ). Formation of 1α,25(OH) 2 D 3 (p = 0.1036), 4β,25(OH) 2 D 3 (p = 0.4451), and 24R,25(OH) 2 D 3 (p = 0.2228) did not differ between ground and flight ( Fig. 6D ). Consistent with formation of 1α,25(OH) 2 D 3 and agonism of the VDR, the expression of CYP24A1 but not CYP3A5 or CYP27B1 was significantly higher in vitamin D-treated samples than media controls for both ground and flight conditions ( Fig. 6E ). The expression of CYP24A1 was correlated with formation of 24R,25(OH) 2 D 3 in ground samples (r = 0.77, p = 0.008) but not flight samples (r = 0.17, p = 0.715) ( Fig. 6F ).
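The r values above are Pearson correlation coefficients relating CYP24A1 expression to 24R,25(OH) 2 D 3 formation. A self-contained sketch on toy data (not the study's measurements):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Toy data: expression tracking metabolite formation almost linearly (r close to 1).
expression = [1.0, 2.0, 3.0, 4.0]
formation = [2.1, 3.9, 6.0, 8.0]
print(round(pearson_r(expression, formation), 3))
```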
PT-MPS biomarker responses to vitamin D in flight and ground conditions
To assess whether vitamin D treatment altered effluent biomarkers, as with the serum-treated PT-MPS, we measured KIM-1 and IL-6 in flight and ground samples. Comparison of samples for both biomarkers demonstrated consistent increases for all four donors in both flight and ground, although interpretation is limited by sample availability ( Figs. 7A and 7B ). Vitamin D treatment significantly increased KIM-1 secretion relative to media control for both ground (9.2-fold, p < 0.0001) and flight conditions (5.2-fold, p < 0.0001) ( Fig. 7C ). IL-6 secretion was significantly increased by vitamin D treatment in ground (p < 0.0001) and flight (p = 0.0018) ( Fig. 7D ). In addition, when comparing levels of IL-6 in vitamin D-treated PT-MPS between ground and flight, the levels in flight were significantly lower than on ground (p = 0.001).

Discussion
Plasma levels of 1α,25(OH) 2 D 3 have been shown to decrease over time in astronauts on flight. This phenomenon could be due to several known factors, including 1) changes in hydrostatic pressure that drive the movement of water and protein from the intravascular space to intracellular and interstitial compartments, resulting in hemodilution 23 or 2) a partial decoupling of the renin-angiotensin-aldosterone-vasopressin system due to hypercalciuria secondary to bone mineral loss on orbit. 24 Our team explored the hypothesis that microgravity-induced changes in PTEC-mediated metabolism of vitamin D might also contribute to the observed decline in plasma levels of 1α,25(OH) 2 D 3 .
In both the ground condition and flight condition, PTECs treated with 25(OH)D 3 generated 1α,25(OH) 2 D 3 , 4β,25(OH) 2 D 3 , and 24R,25(OH) 2 D 3 , the primary metabolites of CYP27B1, CYP3A5, and CYP24A1, respectively ( Fig. 6 ). The levels of these metabolites did not differ between ground and flight conditions. Induction of CYP24A1 , a canonical target gene of the VDR, was robust, indicating that the feedback mechanism within PTECs was intact and did not differ between ground and flight conditions ( Fig. 6 ). We conclude that microgravity did not alter the metabolic activity of CYP27B1, CYP24A1, or CYP3A5, nor did it significantly alter the inducibility of CYP24A1 , a feedback mechanism which helps to tightly regulate plasma levels of 1α,25(OH) 2 D 3 . Regarding effluent biomarker responses to 25(OH)D 3 , we observed increases in both KIM-1 and IL-6 for both flight and ground groups ( Fig. 7 ). While the levels were generally lower than what was observed with 2% normal human serum, this is still noteworthy given that these biomarkers are typically associated with tubular injury. Also of note is the difference between flight and ground responses for IL-6: levels were significantly lower in flight, suggesting an attenuated response congruent with the lower number of DEGs ( Fig. 5B ).
The metabolite 1α,24R,25-trihydroxy vitamin D 3 (1α,24R,25 (OH) 3 D 3 ), which is generated by CYP24A1-mediated metabolism of 1α,25(OH) 2 D 3 , was not measured in our assay. Therefore, it is unclear whether the reason for the relatively low levels of 1α,25(OH) 2 D 3 was poor formation from 25(OH)D 3 or rapid elimination to 1α,24R,25(OH) 3 D 3 . The levels of 25(OH)D 3 (~ 750 ng/mL) used in our study were supraphysiological and far exceeded those of 1α,25(OH) 2 D 3 (~ 0.04 ng/mL). The average ratio of 25(OH)D 3 :1α,25(OH) 2 D 3 in human plasma is ~ 500, whereas in this study it was ~ 18,750. 25 While 1α,25(OH) 2 D 3 is the most potent vitamin D metabolite binding to the VDR, 25(OH)D 3 can compete with 1α,25(OH) 2 D 3 for binding of intestinal chromatin homogenates when administered at concentrations 150-fold higher than that of 1α,25(OH) 2 D 3 . 26 Similarly, 25(OH)D 3 stimulates calcium transport, a marker of VDR activity, in perfused intestine when administered at levels 200 times that of 1α,25(OH) 2 D 3 . 26 In vitro , there is strong evidence that 25(OH)D 3 can activate the VDR. 27 , 28 Thus, both 25(OH)D 3 and 1α,25(OH) 2 D 3 can elicit VDR responses if the ratio of 25(OH)D 3 :1α,25(OH) 2 D 3 is greater than 200. Consequently, as the actual intracellular levels of 1α,25(OH) 2 D 3 were unknown in this study, it is unclear whether activation of the VDR and induction of CYP24A1 were triggered by 25(OH)D 3 or 1α,25(OH) 2 D 3 . Nevertheless, we can conclude that microgravity did not appear to affect metabolism of 25(OH)D 3 via CYP27B1, CYP3A5, or CYP24A1.
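The ratio comparison above is straightforward arithmetic on the stated concentrations: ~750 ng/mL 25(OH)D 3 against ~0.04 ng/mL 1α,25(OH) 2 D 3 gives a ratio 37.5 times the typical plasma value of ~500:

```python
conc_25ohd3 = 750.0    # ng/mL, as dosed in the study
conc_1a25ohd3 = 0.04   # ng/mL, active metabolite level stated in the text
plasma_ratio = 500.0   # typical human plasma 25(OH)D3 : 1a,25(OH)2D3 ratio

study_ratio = conc_25ohd3 / conc_1a25ohd3
print(round(study_ratio))                    # 18750
print(round(study_ratio / plasma_ratio, 1))  # 37.5
```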
We also investigated the possibility that microgravity could affect the response of PTECs to proteinuria; 29 , 30 we tested whether the biological response was altered in flight compared to the ground condition by treating the PTECs with 2% normal human serum. In both ground and flight conditions, pathway analysis revealed that serum treatment induced genes associated with proliferation, inflammation, and reorganization of the extracellular matrix environment, with a concomitant downregulation of metabolic and biosynthetic pathways. The transcriptional and protein-level response of PTECs to 2% normal human serum did not differ between ground and flight conditions. While there was no condition-dependent response of PTECs to 2% human serum treatment, the observed transcriptional responses suggest PTECs have the potential to promote renal inflammation and fibrosis during proteinuria.
One mechanism by which PTECs acquire a proinflammatory phenotype is through cell cycle arrest at either the G1/S or G2/M phases of the cell cycle. While there are no definitive transcriptional markers of cell cycle arrest, we observed induction of genes involved in cell cycle arrest. For example, SMAD3 was the most and second-most significantly induced gene by serum treatment in ground and flight, respectively ( Supplemental Tables 3 and 4 ). SMAD3 is strongly associated with renal fibrosis as SMAD3 knockout prevents fibrosis in mouse models of UUO, diabetic nephropathy, hypertensive nephropathy, and chronic aristolochic acid nephropathy. 31 – 34 A potential mechanism by which SMAD3 contributes to renal fibrosis is promotion of cell cycle arrest. For example, SMAD3 contributes to C-reactive protein-mediated G1/S cell cycle arrest in a mouse model of IRI and in human kidney 2 (HK-2) cells. 35 Arrest of proximal tubule cells in the G2/M phase has also been implicated in acquisition of a proinflammatory secretory phenotype in IRI, UUO, and aristolochic acid nephropathy mouse models of AKI. 36 Arrest in the G2/M phase would be expected to be associated with higher levels of DNA damage response transcripts. In our data, we observed that several DNA damage response transcripts were induced, including RAD51, RAD54, and BRCA1. However, RAD51, RAD54 , and BRCA1 have also been shown to be downstream targets of FOXM1 during epithelial repair after IRI. 37 Whether serum treatment increases the proportion of PTECs arrested at either the G1/S or G2/M stages should be investigated in future studies.
In summary, we demonstrated that microgravity neither altered PTEC metabolism of vitamin D nor did it induce a unique response of PTECs to human serum. The decline in the plasma levels of 1α,25(OH) 2 D 3 in astronauts in flight appears to be independent of a change in renal expression of vitamin D metabolizing enzymes. Future efforts should focus on delineating the role of PTH and serum calcium on PTEC metabolism of vitamin D. The overall response of PTECs to serum challenge is congruent with the maladaptive repair response in vivo in which a failure of PTECs to re-differentiate after tubular injury is associated with tissue inflammation and fibrosis. The factors regulating PTEC differentiation status during proteinuric and disease states should further be elucidated and their potential as novel therapeutic targets for treating and preventing renal inflammation and fibrosis should be investigated.

Author contributions
KAL, KJI, CKY, EJK, and JH developed and designed the experiments; KAL, KJI, JY, and JB conducted the experiments. KAL, KJI, CKY, JH, LW, TKB, JWM and EJK conducted data analysis. JC and KET conducted the vitamin D analyses. SC and PK led the development of hardware. All authors contributed to writing and editing the manuscript. Portions of the work presented in this manuscript are derived from author Dr. Kevin A. Lidberg’s doctoral thesis “Application of a Renal Proximal Tubule Microphysiological System for Drug Safety Assessment and Disease Modeling”, https://digital.lib.washington.edu/researchworks/handle/1773/47666 .
The microgravity environment aboard the International Space Station (ISS) provides a unique stressor that can help understand underlying cellular and molecular drivers of pathological changes observed in astronauts with the ultimate goals of developing strategies to enable long-term spaceflight and better treatment of diseases on Earth. We used this unique environment to evaluate the effects of microgravity on kidney proximal tubule epithelial cell (PTEC) response to serum exposure and vitamin D biotransformation capacity.
To test if microgravity alters the pathologic response of the proximal tubule to serum exposure, we treated PTECs cultured in a microphysiological system (PT-MPS) with human serum and measured biomarkers of toxicity and inflammation (KIM-1 and IL-6) and conducted global transcriptomics via RNAseq on cells undergoing flight (microgravity) and respective controls (ground). We also treated 3D cultured PTECs with 25(OH)D 3 (vitamin D) and monitored vitamin D metabolite formation, conducted global transcriptomics via RNAseq, and evaluated transcript expression of CYP27B1, CYP24A1, and CYP3A5 in PTECs undergoing flight (microgravity) and respective ground controls.
We demonstrated that microgravity neither altered PTEC metabolism of vitamin D nor did it induce a unique response of PTECs to human serum, suggesting that these fundamental biochemical pathways in the kidney proximal tubule are not significantly altered by short-term exposure to microgravity. Given the prospect of extended spaceflight, more study is needed to determine if these responses are consistent with extended (> 6 month) exposure to microgravity.

Acknowledgements
This work was supported by the National Center for Advancing Translational Sciences (UH3TR000504, UG3TR002158 and UH3TR002178), jointly by the National Center for Advancing Translational Sciences and the Center for the Advancement of Science in Space (UG3TR002178), the National Institute of Environmental Health Sciences (P30ES00703 & T32 ES007032) and an unrestricted gift from Northwest Kidney Centers to the Kidney Research Institute. BioServe’s work was supported in part by NASA contracts 80JSC020F0019 and 80JSC017F0129. The funders played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript.
We would like to thank the Life Science and Research Support Staff at Kennedy Space Center, in particular John Catechis and Anne Currin. In addition, we would like to express our gratitude to SpaceX and NASA for supporting our studies on CRS-17, especially ISS crewmembers Christina Koch & Anne McClain.
Data availability
The datasets generated and/or analyzed during the current study are available in the BioSystics data repository [ https://www.biosystics.com/ ].

License: CC BY. Citation: Res Sq. 2023 Dec 21;:rs.3.rs-3778779.
PMC10775398 (PMID: 38196640)

Introduction
According to the National Alliance on Mental Illness (NAMI), in 2021 22.8 percent (57.8 million) of United States (US) adults experienced a mental illness; the annual prevalence is 21.4 percent in African Americans. 47.2 percent of adults with mental illness received treatment, with treatment rates of 39.4 percent in African Americans. Treatment rates in women were reported as 51.4 percent, while male rates were 40 percent. 1 Results from survey data from the Healthy Minds Study between 2013 and 2021, which included greater than 350,000 students at 373 campuses, reported that greater than 60 percent of students in 2020–2021 met the criteria for one or more mental health problems. 2 Among US adults aged 18–44, there are nearly 600,000 hospitalizations each year from mood disorders and the psychosis spectrum. 1 Demographics have been linked to the utilization of mental health resources, with older adults being more likely to use resources. 3
Multiple barriers have been attributed to the underutilization of mental health resources. Results from the WHO World Mental Health International College Student Initiative, which assessed barriers in first-year college students, stated that wanting to handle the mental health problem alone, wanting to talk to family or friends instead, or being too embarrassed to seek help were all barriers rated the most important in college students. 4 In a review conducted by Mahogany S. Anderson, it is stated that even if there are available and accessible resources, African American students are historically less likely to utilize them due to barriers such as stigma, cultural mistrust, or racial/ethnic identity. 3 African Americans have also utilized coping mechanisms due to the lack of mental health resources available in the community. Another barrier that African Americans face is awareness, whether a lack of knowledge about mental health resources or a lack of awareness of the location of these resources in the community. 5 While all of these factors play a part in the overall cause of lack of utilization, each person most likely has their own unique reasoning as to why they do not or will not use the mental health resources that are available to them. The survey conducted through this research aims to examine exactly what that cause is and how students perceive their reasoning for underutilization.

Methods
A peer-reviewed literature search was done using Google Scholar and PubMed databases between 2001 and 2019. Keywords like “African American,” “Mental Health,” “Underutilization,” and “Barriers” were used to narrow the search. A total of twelve articles were reviewed and determined to be in the scope of this literature review. One hundred fifty-nine surveys were administered to students attending Xavier University of Louisiana to assess mental health resource utilization on campus. The surveys were administered to students throughout the university on a volunteer basis without regard to classification. Each student completed a five-question survey comprising questions assessing mental health resources on/off campus, preferred mental health resources, and students’ likeliness to continue utilizing resources.
Study Design
This was a focused, online survey-based study conducted from September 2022 to November 2022 of students who attended Xavier University of Louisiana. Approval was obtained from Xavier University of Louisiana’s Institutional Review Board. All students currently enrolled at Xavier University of Louisiana were eligible for the study. Faculty, staff, and other members of the community were excluded from this study.
Statistical Analysis
SPSS was used to analyze data in order to assess the utilization of mental health resources amongst students on an HBCU campus.

Results
159 participants were included in this study. 68 of the students were under 21 years of age, while 91 of the students were 21 years of age or older. 85.53% (136) of the participants were female and 13.84% (22) were male. 1 participant responded as non-binary. 60.24% (100) of the students were Black or African American, while 25.90% (43) were White American, 9.64% (16) were Asian, and 1.20% (2) were American Indian/Alaska Native. 3.01% (5) chose not to disclose race. 86.54% (135) of the students were also non-Hispanic or Latino, with 9.64% (15) being Hispanic or Latino. 3.85% (6) chose not to disclose ethnicity. The classification of participants was spread across levels, split between 87 graduate students and 72 undergraduate students (p < 0.05).
Participants had the option to describe the barriers they faced to utilization of mental health resources through a multiple-choice style question, as well as an “other” box to type their response. The following answers resulted from this question. 39 students chose not to answer. 12 students answered that there was concern regarding judgement if resources were utilized. 62 students answered that they did not feel resources were needed at the time. 40 students answered that they were not aware of current available resources. 64 students answered that they felt a time constraint. 5 students specified other reasons. Other reasons included that they convinced themselves the resources were for people with bigger problems, there was a language barrier, they have tried to use resources before and they did not help, as well as there being a lack of money and they did not feel like sitting down to talk about their feelings.
Another question in the survey asked participants what forms of coping mechanisms they found most beneficial to them. 115 participants prefer to speak to friends and/or family members. 65 participants prefer to handle the situation on their own. 32 prefer to utilize mental health resources. 13 participants prefer not to disclose. 3 participants specified other methods, including anxiety medication, as well as meditation, journaling, and art.
Lastly, participants were asked to rate their level of comfort in seeking help or continuing to utilize mental health resources on or off campus on a scale of 1–10. 16 participants responded they were not at all comfortable, whereas 26 participants responded that they were completely comfortable.

Discussion
Students were predominately African American (60.24%) and female (85.53%). These statistics are consistent with the demographics of the university under study, whose students are mostly African American women. Of the 159 surveys completed, 13 responded they have used mental health resources on campus. Approximately 61.5% (8/13) are satisfied or very satisfied with the services. 29 responded they have used off campus mental health resources. Approximately 41.4% (12/29) are satisfied or very satisfied with the services. 62 (39%) responded that time constraint was a barrier faced in utilizing mental health resources. 60 (38%) responded that they did not feel that mental health resources were currently needed. 40 (25%) responded that they were not aware of the mental health resources available. There is a significant association between classification and comfort level continuing to utilize mental health resources on or off campus (p = 0.02).
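The classification-by-comfort association above was computed in SPSS and the specific test is not named; a chi-square test of independence on the contingency table is the standard approach for such categorical data. A pure-Python sketch on invented counts (not the survey's data), comparing the statistic to the df = 1, alpha = 0.05 critical value of 3.841:

```python
def chi_square_independence(table):
    """Chi-square statistic and degrees of freedom for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical 2x2 table: classification (rows) by comfortable yes/no (columns).
stat, df = chi_square_independence([[30, 20], [10, 40]])
print(round(stat, 2), df)  # 16.67 1
print(stat > 3.841)        # True -> reject independence at alpha = 0.05
```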
Limitations
Limitations include a lack of a validated tool to assess utilization. Additionally, this data may not be fully representative of the population on campus, as some students may not feel comfortable in busier parts of campus, and those interested in mental health may have been more willing than others to participate in the survey. Data collected from a single institution could be reflective of factors pertaining to the institution rather than of HBCUs as a whole. Despite these limitations, this data does reflect similar rates of utilization of African American college students in other settings according to previous literature. This study is still able to illustrate the importance of future focus on mental health utilization in African American students in the HBCU setting.

Conclusions
This study shows that multiple barriers contribute to the underutilization of mental health resources both on and off campus. According to the results of this survey, the majority of students either lacked time to utilize mental health resources or denied needing them. Further research should be conducted on the association between classification and comfort level, and on the best ways to promote the various available mental health resources so that they can benefit different individuals. These results provide an opportunity to improve utilization of both on- and off-campus mental health resources. | Author Contribution
Ahlam Ayyad contributed to the conception of the study, study design, data analysis, and first manuscript draft preparation. Thomas Maestri was a research mentor who assisted in every step of the process. Savannah Harris, Nina Casanova, and Hanan Ibrahim contributed to the literature review, survey administration, and manuscript drafts. All authors commented on and revised the manuscript and gave final approval.
Objective:
In 2018, a survey conducted with students on a Historically Black College and University (HBCU) campus showed a significant lack of utilization of both on- and off-campus mental health resources. The primary objective of this survey was to evaluate the lack of utilization of mental health resources at an HBCU in order to effectively promote student mental wellness.
Methods:
A short electronic survey was administered to students to assess underutilization.
Results:
Subjects were predominantly African American (60.24%) and female (85.53%). Of the 159 surveys completed, 13 respondents reported having used on-campus mental health resources; approximately 61.5% (8/13) were satisfied or very satisfied with the services. Twenty-nine respondents reported having used off-campus mental health resources; approximately 41.4% (12/29) were satisfied or very satisfied with the services. Sixty-two respondents (39%) identified time constraints as a barrier to utilizing mental health resources, 60 (38%) responded that they did not feel mental health resources were currently needed, and 40 (25%) responded that they were not aware of the mental health resources available. There was a significant association between classification and comfort level in continuing to utilize mental health resources on or off campus (p = 0.02).
Conclusions:
There are multiple barriers that have contributed to the underutilization of mental health resources. According to the results of this survey, the majority of students either lacked time to utilize mental health resources or denied needing them. These results provide an opportunity to improve utilization of both on- and off-campus mental health resources. | Acknowledgment:
This work was supported by the NIH RCMI program at Xavier University of Louisiana through Grant U54MD00795. Additionally, we would like to acknowledge Dr. Carroll J. Diaz, Jr., Ph.D. for assisting in the statistical analysis of the data. | CC BY | no | 2024-01-16 23:35:07 | Res Sq. 2023 Dec 19;:rs.3.rs-3760662 | oa_package/15/6f/PMC10775398.tar.gz
|
PMC10775400 | 38196619 | Introduction
Uveal melanoma (UM) is the most common intraocular malignancy in adults, with a high rate of metastasis and a poor prognosis. 1 The accurate diagnosis of small UM is challenging due to similar clinical characteristics to benign choroidal nevi. Tumors diagnosed as choroidal nevi that subsequently grow during an observation period are at increased risk for metastasis. 2 , 3 Therefore, improving the diagnosis of UM and choroidal nevi at the time of initial presentation has the potential to improve clinical outcomes.
Most types of cancer require a tissue diagnosis via biopsy prior to making a treatment decision. However, UM is usually diagnosed clinically due to the morbidity associated with ocular biopsy. 4 – 6 The risks associated with biopsy are especially high in small choroidal tumors. 7 , 8 Therefore, clinicians base their diagnosis on careful clinical examination and multimodal imaging, including fundus photography, autofluorescence, optical coherence tomography (OCT), and ultrasonography, to evaluate patients with melanocytic choroidal tumors. Indeterminate lesions, for which a definite diagnosis cannot be made, are often monitored for tumor growth with serial examination and imaging.
Tumor growth is used as a surrogate for malignant transformation and, therefore, an indication for treatment in patients with indeterminate melanocytic choroidal tumors. Large retrospective studies have been performed to identify clinical risk factors that predict malignant transformation in order to identify patients at high risk for tumor growth. Patients at high risk for malignant transformation are most often treated with ionizing radiation or enucleation based on the clinical situation including tumor size, extent of extraocular extension, vision, and the patient’s preference. Conversely, patients at low risk for malignant transformation are often observed to avoid the unnecessary ocular morbidity associated with treatment of benign choroidal nevi.
The risk factors for malignant transformation have been well characterized and include tumor thickness greater than 2 mm, subretinal fluid, visual symptoms, orange pigment, proximity to the optic disc, ultrasonographic hollowness, and the absence of drusen. 9 – 12 The presence of three or more of these features suggests a greater than 50% risk of malignant transformation. 13 The use of multimodal imaging has been shown to be capable of identifying these risk factors and therefore serves as an important tool for clinicians evaluating these lesions 14 . However, improving our ability to predict malignant transformation and accurately diagnose small UM can reduce the risk of metastasis and save patient lives.
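The counting rule above (three or more of the established risk factors suggesting a greater than 50% risk of malignant transformation) lends itself to a simple sketch. The boolean feature keys and dictionary representation below are illustrative choices, not part of any cited scoring system's implementation:

```python
# The seven risk factors listed above, as boolean feature keys
RISK_FACTORS = (
    "thickness_gt_2mm",
    "subretinal_fluid",
    "visual_symptoms",
    "orange_pigment",
    "near_optic_disc",
    "ultrasound_hollowness",
    "absence_of_drusen",
)

def risk_factor_count(lesion):
    """Count how many of the established risk factors a lesion exhibits."""
    return sum(bool(lesion.get(f, False)) for f in RISK_FACTORS)

def high_risk(lesion, threshold=3):
    """Three or more features suggests >50% risk of malignant transformation."""
    return risk_factor_count(lesion) >= threshold

example = {"thickness_gt_2mm": True, "subretinal_fluid": True, "orange_pigment": True}
print(risk_factor_count(example), high_risk(example))  # 3 True
```

This is a toy decision aid, not a validated clinical tool; the published scoring systems weight and combine these features through statistical modeling of large case series.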
Machine learning (ML) offers a promising approach to enhance the identification and evaluation of intraocular lesions, thereby providing a versatile tool for clinicians. Deep learning, a subset of ML, has recently advanced many areas of computer vision, including image classification, 15 – 19 object detection, 20 – 23 and semantic segmentation. 24 – 29 Convolutional neural networks (CNNs) assist in disease diagnostics and progress our understanding of the possibilities of extracted information from various imaging techniques. Despite its significant potential, few studies have looked at the role of ML in the diagnosis of melanocytic choroidal tumors. 30 In the present study, we analyze the utility of ML in the evaluation of choroidal nevi and UM. Our objective was to train an ML algorithm to identify risk factors for UM using ultra-widefield fundus images and B-scan ultrasonography. In addition to providing useful information for the diagnosis of the disease itself, we also attempt to maximize the information we can extract from each imaging modality. As such, this ML algorithm may be a useful tool for evaluating melanocytic choroidal tumors for early detection of malignancy. | Methods
This retrospective study included analysis of 223 eyes from 221 patients with melanocytic choroidal lesions seen at the eye clinic at the University of Illinois at Chicago between 01/2010 and 07/2022 ( Table 1 ). The study was approved by the institutional review board (IRB) and patient records were collected from the electronic medical record system. The inclusion criteria for this study were patients with a clinical diagnosis of choroidal nevi or UM. Exclusion criteria were patients who had been treated prior to presentation and patients without both ultra-widefield imaging (Optos PLC, Dunfermline, Fife, Scotland, UK) and B-scan ultrasound (Eye Cubed and ABSolu, Lumibird Medical, Rennes, France) taken at the time of initial presentation. The patients were divided into two groups: (1) patients diagnosed with a choroidal nevus and (2) patients diagnosed with UM. The clinical examination and diagnosis at the time of presentation were taken as the ground truth for diagnosis and for the presence of risk factors for malignant transformation, which included lesion thickness, subretinal fluid, orange pigment, proximity to the optic nerve, ultrasound hollowness, and drusen. The risk factors were verified by a single investigator (MJH) using all multimodal imaging available from the time of diagnosis, including ultra-widefield images (UWF), autofluorescence images, A-scan and B-scan ultrasonography (US), and OCT. We also explored predicting the classification of each image as choroidal nevus or UM. The UWF images and B-scan US from all patients were collected and analyzed ( Table 2 ).
The AI-based models were developed using the ResNet 18 architecture. 18 The ResNet architecture consisted of two parts: (1) a feature extractor, which processed UWF or US images to extract features as an output, and (2) a task-specific header that used features from the previous layers to generate task-specific outputs (e.g., classification output 0 for the absence of apical/overlying subretinal fluid, or output 1 for its presence). Text information within the US images was cropped out, and images were then scaled to 512 × 512 pixels before being fed into the models. Cross-entropy was used as the loss function, and the Adam algorithm was used as the optimizer. The learning rate was set to 0.00005 and the models were trained for 50 epochs. The best model was selected based on the lowest loss observed in the testing set.
The performance of the ML model was measured by the area under the curve (AUC). The bootstrap confidence interval (CI) for the AUC was obtained using the percentiles of the bootstrap distribution. For instance, the 95% CI was obtained using the 2.5th and 97.5th percentiles of the bootstrap distribution. The 95% CI was computed based on 1000 bootstrap replicates.
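The percentile-bootstrap CI described above can be sketched in pure Python, with AUC computed via its rank-based (Mann-Whitney) definition; the labels and scores below are illustrative, not study data:

```python
import random

def auc(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative case (ties count 0.5): the Mann-Whitney formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample cases with replacement and take
    the 2.5th and 97.5th percentiles of the bootstrap AUC distribution."""
    rng = random.Random(seed)
    n = len(labels)
    replicates = []
    while len(replicates) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if len(set(ys)) < 2:  # a resample must contain both classes
            continue
        replicates.append(auc(ys, [scores[i] for i in idx]))
    replicates.sort()
    lo = replicates[int((alpha / 2) * n_boot)]
    hi = replicates[min(n_boot - 1, int((1 - alpha / 2) * n_boot))]
    return lo, hi

# Illustrative labels/scores only (0 = nevus, 1 = UM)
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores))  # 0.75
print(bootstrap_auc_ci(labels, scores, n_boot=200))
```

With 1000 replicates, as in the study, the 2.5th and 97.5th percentiles give the reported 95% CIs.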
We investigated the region or tissue by generating saliency maps for visual explanations of each model using Gradient-Weighted Class Activation Mapping (Grad-CAM). 31 Grad-CAM uses the gradients of the target concept, such as ‘UM’ in our classification network, flowing into the final convolutional layer. This produces a coarse localization map highlighting the important regions in the image for predicting the concept. The primary goal of Grad-CAM is to reflect the degree of importance of pixels (regions of interest) to the human visual system, allowing us to make decisions on the classification task. | Results
Patient demographics
A total of 115 patients with choroidal nevi and 108 patients with UM were included in this study. The mean age of patients with choroidal nevi was 64.9 years (range: 27–95), while patients with UM had a mean age of 66.1 years (range: 30–97) ( Table 1 ). The majority of patients in the choroidal nevus group were female (75, 65.2%), while the UM group had a more balanced gender distribution with 53 (49.1%) males and 55 (50.9%) females. The racial distribution of patients in both groups was predominantly White, with 76 (73.8%) patients in the choroidal nevus group and 82 (83.7%) patients in the UM group among patients with available race data. Similarly, the ethnicity of patients in both groups was predominantly non-Hispanic or Latino, with 91 (85.8%) patients in the choroidal nevus group and 98 (96.1%) patients in the UM group.
Clinical features
The mean lesion thickness was 1.6 mm for choroidal nevi and 5.9 mm for UM. The presence of subretinal fluid was observed in 5 (4.3%) patients with choroidal nevi and 75 (69.4%) patients with UM. Orange pigment was present in 3 (2.6%) patients with choroidal nevi and 34 (31.5%) patients with UM. The mean margin to the optic nerve head was 5.0 mm for choroidal nevi and 3.1 mm for UM. Drusen were present in 42 (36.5%) patients with choroidal nevi and 35 (32.4%) patients with UM. Ultrasonographic hollowness was observed in 18 (15.7%) patients with choroidal nevi and 86 (79.6%) patients with UM. Finally, a mushroom shape was not observed in any patients with choroidal nevi, while it was present in 16 (14.8%) patients with UM.
AI-based models
We trained 11 models in this study. The AI-based models achieved the following AUCs: 0.982 (95% CI: 0.875–1) with US and 0.964 (95% CI: 0.792–1) with UWF for thickness prediction; 0.963 (95% CI: 0.760–1) with US and 0.870 (95% CI: 0.560–1) with UWF for subretinal fluid prediction; 0.735 (95% CI: 0.333–1) with UWF for orange pigment prediction; 0.520 (95% CI: 0–1) with US and 0.667 (95% CI: 0.111–1) with UWF for margin prediction; 0.663 (95% CI: 0.222–1) with UWF for drusen prediction; 0.919 (95% CI: 0.625–1) with US for hollowness prediction; and 1 (95% CI: 1–1) with US and 0.894 (95% CI: 0.643–1) with UWF for category prediction ( Fig. 1 ).
For models trained with US images, the sensitivity/specificity values were as follows: 0.900/0.818 for thickness, 0.900/0.818 for subretinal fluid, 0.867/0.200 for margin to the optic nerve head, 0.889/0.727 for ultrasonographic hollowness, and 0.818/1.000 for categories. For models trained with UWF images, the sensitivity/specificity values were: 1.000/0.727 for thickness, 0.667/0.833 for subretinal fluid, 0.000/1.000 for orange pigment, 0.800/0.600 for margin, 0.375/0.846 for drusen, and 0.636/0.833 for categories. Grad-CAM images were generated to evaluate any localizing information.
Grad-CAM images
Localization maps highlighting the important pixels (regions of interest) resulted in patterns that provided insight into the classification tasks. In the category prediction model from US images, the highest-probability regions in the overlying Grad-CAM images tended to include both the lesion of interest and its surrounding tissues. For instance, the highlighted region of a UM included the orbit posterior to the tumor as well as ocular regions adjacent to the lesion ( Fig. 2 ). Additionally, a subset of images highlighted the anterior segment on the US image in the location of the iris and lens, which have been implicated in patients with uveal melanoma. 32 , 33
From the UWF images, the Grad-CAM maps most often correctly located the tumor region for UM. However, the localization maps for nevi tended to be broader and to surround the lesions rather than focusing on the nevi themselves ( Fig. 2 ). In one false-negative case in the category prediction from US images (Supplemental Fig. 1, score of 0.289), the lesion was 1.94 mm in height but its largest basal diameter was 7.54 mm. Compared to the other images of UM in the testing dataset, this image had the smallest lesion thickness.
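The core Grad-CAM computation behind these maps (channel weights obtained by globally average-pooling the gradients, then a ReLU over the weighted sum of activation maps) can be sketched without a deep learning framework; the tiny activation and gradient arrays below are illustrative placeholders for a final convolutional layer:

```python
def grad_cam(activations, gradients):
    """Coarse localization map ReLU(sum_k w_k * A_k), where the channel
    weight w_k is the spatial mean of the k-th gradient map.
    activations, gradients: lists of K feature maps, each H x W (nested lists)."""
    K = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # Channel weights: global average pooling of the gradients
    weights = [sum(sum(row) for row in g) / (H * W) for g in gradients]
    # Weighted combination of activation maps, followed by ReLU
    return [[max(0.0, sum(weights[k] * activations[k][i][j] for k in range(K)))
             for j in range(W)] for i in range(H)]

# Tiny 2-channel, 2x2 example: the first channel supports the target class,
# the second opposes it (negative gradients), so only first-channel
# activations survive the ReLU
acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 3.0], [1.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # [[1.0, 0.0], [0.0, 2.0]]
```

In practice the gradients are those of the target class score (e.g. 'UM') with respect to the final convolutional layer, and the resulting coarse map is upsampled onto the input image.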
The subretinal fluid prediction model from US images often highlighted two focal regions in Grad-CAM, corresponding to the subretinal fluid on both sides of the lesion, with a confidence score of 0.973 ( Fig. 3 ). The model was also capable of locating subretinal fluid in the UWF image, with a confidence score of 0.759. Our model also consistently evaluated predicted hollowness, with the visualization focusing most often on the lesion itself (Supplemental Fig. 2).
The accurate diagnosis of small melanocytic choroidal tumors is challenging due to similar clinical characteristics between benign choroidal nevi and small malignant UM. These patients benefit from the careful evaluation by an ocular oncologist experienced in managing intraocular tumors. Current practice uses clinical examination and multimodal imaging to predict malignant transformation and thereby guide the diagnosis and management of these tumors. Our study provides proof of concept for ML to identify risk factors for malignant transformation at the time of initial presentation.
Clinical features associated with the risk of malignancy have been well established, including the presence of orange pigment and subretinal fluid. 34 , 35 Shields et al. conducted a study to identify risk factors for malignant transformation of choroidal nevi, comprising the largest retrospective case series at the time. 11 These risk factors included tumor thickness greater than 2 mm on ultrasonography, subretinal fluid, patient symptoms, orange pigment, and tumor margin within 3 mm of the optic disc. 11 In 2009, Shields et al. expanded their case series and identified additional risk factors to include ultrasound hollowness and the absence of a halo or drusen overlying the lesion. 13 These factors were combined to form the well-known mnemonic “To Find Small Ocular Melanoma Using Helpful Hints Daily” (TFSOM-UHHD).
While the original TFSOM system provided an evidence-based method for predicting malignant transformation of melanocytic choroidal tumors, Shields et al. further extended the system in 2019 with the development of the “To Find Small Ocular Melanoma Doing IMaging” (TFSOM-DIM) criteria. 14 TFSOM-DIM incorporates multimodal imaging techniques in the identification of risk factors, including subretinal fluid on OCT, orange pigment on autofluorescence, and a basal diameter of at least 0.5 mm on fundus photography. 14 These additional imaging techniques provided a more nuanced approach to identifying UM, 36 which have been evaluated in subsequent studies. Geiger et al. used the TFSOM-DIM criteria to grade multimodal imaging by retrospective chart review, revealing significant differences in the range of risk scores between UM and choroidal nevi. 37
Other groups have independently identified risk factors for malignant transformation in melanocytic choroidal tumors. The Collaborative Ocular Melanoma Study (COMS) analyzed small choroidal lesions to find that thickness greater than 2 mm, basal diameter greater than 12 mm, presence of orange pigment, and absence of drusen and RPE changes were predictive of tumor growth. 38 Roelofs et al. developed a tumor categorization system, which provided a score for choroidal lesions based on five features: Mushroom shape, Orange pigment, Large size, Enlarging tumor, and Subretinal fluid. 39 Their study found these criteria to have a sensitivity of 99.8% in identifying melanocytic choroidal tumors at risk for malignant transformation. 39 These scoring systems emphasize the opportunity for objectivity in determining the distinction between the two lesions. Despite their potential in the prediction of malignancy, identifying these risk factors has traditionally been done through careful ophthalmic examination and image interpretation, which is subject to inter-observer variability. 40 The application of ML algorithms, such as the one used in our study, has the potential to provide a more accurate and efficient system to improve patient prognosis.
The use of ML as a tool for evaluating retinal lesions has gained interest in recent years. ML involves training algorithms to learn from data sets to act on future data. 41 While it has been shown to be useful in the early detection of diabetic retinopathy (DR), 42 , 43 its potential for predicting malignant transformation in UM has not yet been extensively explored.
In 2014, Roychowdhury et al. developed a novel, fully automated DR detection and grading system for automated screening and treatment prioritization, achieving a sensitivity of 100%, specificity of 53.26%, and an AUC of 0.904. 42 In a study by Lam et al. (2018), the use of ML in DR was augmented by developing a CNN to recognize and distinguish between mild and multi-class DR on color fundus images with enhanced recognition of subtle characteristics. 43
Supervised ML techniques have shown promise in classifying retinal disease type and stage. 44 In the context of UM, wide-field digital true color fundus cameras can capture a choroidal nevus and its associated features in a single photo, potentially making data labeling and ML training faster and more efficient. 30 This understanding suggests that ML may function as a valuable tool to assess small tumors and facilitate the prediction of malignant transformation.
Early detection and treatment of UM is crucial as metastasis events may occur early, and effective treatment can prevent its spread. 45 , 46 Despite the availability of effective treatments for the primary tumor, more than 50% of UM patients develop metastatic disease suggesting that UM may metastasize prior to the time of treatment. 45 , 47 Consequently, there is a need for identifying and treating small UM to minimize the number of melanocytic choroidal tumors that are observed and subsequently grow during the observation period. The importance of treating small UM was additionally emphasized by Eskelin et al., who measured doubling time in both untreated and treated metastatic UM and proposed that most metastases begin up to 5 years prior to primary tumor treatment. 48 Murray et al. retrospectively evaluated a case series of small UM undergoing early fine-needle aspiration biopsy combined with pars plana vitrectomy and endolaser ablation. 47 The study found no patients developed metastasis in the follow up period, suggesting that early treatment may lower the risk of mortality compared to observation alone.
Several studies have evaluated non-imaging biomarkers to better diagnose UM and predict prognosis, including some that employ ML techniques. 49 – 51 Serum biomarkers, including several differentially expressed proteins identified in UM gene signatures, have been associated with a worse prognosis in patients diagnosed with UM. 49 , 52 , 53 Furthermore, circulating tumor cells have been detected in patients without clinically detectable metastasis, indicating early spread and highlighting the need to identify prognostic biomarkers. 54 , 55 The search for these biomarkers underscores the importance of early detection of UM, potentially with the aid of ML for the diagnosis and management of UM. Small UM are a particularly important research subject due to the diagnostic challenge and potential for early local treatment to preserve vision and save lives. 56
Our study has important limitations, including a small sample size of patients from a single institution, which restricts the generalizability of our findings. Furthermore, the small sample size likely limited the performance of the ML models. The development of high-performing ML models to characterize choroidal lesions would benefit from multi-institutional collaborations and potentially techniques to artificially increase the sample size based on existing data. In addition, technical limitations such as poor image quality or suboptimal feature extraction can also limit the accuracy of ML models. In our study, the Grad-CAM images sometimes focused on small regions of the lesions themselves rather than the adjacent subretinal fluid or the whole image during the classification task ( Fig. 3 ). To address this, segmentation of the specific regions based on the task, such as a lesion mask ( Fig. 4 ) for orange pigment and drusen or the lesion plus the surrounding retina for subretinal fluid, could improve model performance. Our current classification models for margin, orange pigment, and drusen have relatively low average AUCs and high deviations, suggesting the need for further refinement ( Fig. 1 ). For the margin prediction model, segmentation of the optic nerve and lesion prior to training the model could be useful. In the case of the orange pigment and drusen prediction models, the small size of these features may require alternative approaches such as cropping images into smaller tiles for classification, which could retain a higher resolution. Nonetheless, given these limitations, expert knowledge is crucial to guide the development and use of these models to ensure that they are based on clinically relevant features and accurately reflect the underlying biology of the disease.
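The tiling idea suggested here (cropping a large image into smaller tiles so that fine features such as drusen or orange pigment are classified at higher effective resolution) amounts to enumerating crop windows; the tile size and stride below are arbitrary illustrative choices:

```python
def tile_coords(height, width, tile=256, stride=256):
    """Yield (top, left, bottom, right) windows covering an image; the last
    row/column of tiles is shifted inward so every tile stays full-sized."""
    tops = list(range(0, max(height - tile, 0) + 1, stride))
    lefts = list(range(0, max(width - tile, 0) + 1, stride))
    if height >= tile and tops[-1] != height - tile:
        tops.append(height - tile)
    if width >= tile and lefts[-1] != width - tile:
        lefts.append(width - tile)
    for top in tops:
        for left in lefts:
            yield (top, left, top + tile, left + tile)

tiles = list(tile_coords(512, 512))
print(len(tiles))  # 4 non-overlapping 256 x 256 tiles
```

Each tile would then be classified independently (or its labels aggregated per image), preserving pixel-level detail that would be lost by downscaling the whole image.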
Our analysis provides proof of concept that ML can accurately identify risk factors for malignant transformation in melanocytic choroidal tumors based on a single UWF image or B-scan US image at the time of initial presentation. Further studies can build on these findings to improve the accuracy and applicability of these models in the clinical setting. ML has the potential to be developed into a clinically useful tool to inform and guide management decisions for melanocytic choroidal tumors and potentially save patient lives. | Objective
This study aims to assess a machine learning (ML) algorithm using multimodal imaging to accurately identify risk factors for uveal melanoma (UM) and aid in the diagnosis of melanocytic choroidal tumors.
Subjects and Methods
This study included 223 eyes from 221 patients with melanocytic choroidal lesions seen at the eye clinic of the University of Illinois at Chicago between 01/2010 and 07/2022. An ML algorithm was developed and trained on ultra-widefield fundus imaging and B-scan ultrasonography to detect risk factors of malignant transformation of choroidal lesions into UM. The risk factors were verified using all multimodal imaging available from the time of diagnosis. We also explore classification of lesions into UM and choroidal nevi using the ML algorithm.
Results
The ML algorithm assessed features of ultra-widefield fundus imaging and B-scan ultrasonography to determine the presence of the following risk factors for malignant transformation: lesion thickness, subretinal fluid, orange pigment, proximity to optic nerve, ultrasound hollowness, and drusen. The algorithm also provided classification of lesions into UM and choroidal nevi. A total of 115 patients with choroidal nevi and 108 patients with UM were included. The mean lesion thickness for choroidal nevi was 1.6 mm and for UM was 5.9 mm. Eleven ML models were implemented and achieved high accuracy, with an area under the curve of 0.982 for thickness prediction and 0.963 for subretinal fluid prediction. Sensitivity/specificity values ranged from 0.900/0.818 to 1.000/0.727 for different features. The ML algorithm demonstrated high accuracy in identifying risk factors and differentiating lesions based on the analyzed imaging data.
Conclusions
This study provides proof of concept that ML can accurately identify risk factors for malignant transformation in melanocytic choroidal tumors based on a single ultra-widefield fundus image or B-scan ultrasound at the time of initial presentation. By leveraging the efficiency and availability of ML, this study has the potential to provide a non-invasive tool that helps to prevent unnecessary treatment, improve our ability to predict malignant transformation, reduce the risk of metastasis, and potentially save patient lives. | Acknowledgments
We acknowledge the editorial assistance of the University of Illinois Chicago Center for Clinical and Translational Science (CCTS), which is supported by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through Grant Award Number UL1TR002003.
Financial Support
This work was funded by Research to Prevent Blindness, NY (Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago). The sponsor or funding organizations had no role in the design or conduct of this research. | CC BY | no | 2024-01-16 23:35:07 | Res Sq. 2023 Dec 21;:rs.3.rs-3778562 | oa_package/d5/ab/PMC10775400.tar.gz |
||
PMC10775408 | 38196633 | Introduction
Genome-wide association studies (GWAS) in Alzheimer’s disease (AD) 1 and Parkinson’s disease (PD) 2 continue to identify an increasing number of genetic variants associated with the risk for developing these neurodegenerative disorders. However, the functional mechanisms underlying these associations remain largely unknown. An individual’s predisposition for developing AD or PD results from the complex interplay between genetic and non-genetic factors unfolding their effects over a person’s lifetime 3 , 4 . Examples of non-genetic factors are lifestyle variables (e.g. smoking, nutritional habits), environmental exposures (e.g. air quality, place of residence), and epigenetic mechanisms (e.g. DNA methylation [DNAm], histone modifications). In the context of complex disease research, epigenetic mechanisms play a particularly interesting role as they lie at the intersection between lifestyle/environment, genetics, and the regulation of gene expression 5 – 8 . In this context, DNAm is one of the most widely studied epigenetic marks owing to the advent of high-throughput technologies allowing the interrogation of DNAm profiles on a genome-wide scale.
One method to probe for epigenetic effects on disease risk is to conduct epigenome-wide association studies (EWAS). In AD, several such EWAS have been published using both brain 9 – 13 and peripheral tissues 14 – 16 , highlighting a number of CpG sites that show differential DNAm with respect to disease state. Similar EWAS efforts have been completed in PD, e.g. using brain 17 and blood 18 . Collectively, these studies have led to the initial delineation of DNAm profiles associated with these disorders. Despite this progress, one major caveat of most published EWAS is that cause-effect relationships are difficult to discern, i.e. to distinguish whether the observed differential DNAm patterns contribute to pathogenesis, and as such occur before or early during the disease process (potentially highlighting disease-causing mechanisms), or whether they are the result of the disease process itself (e.g. due to the accumulation of pathologic protein aggregates). One possibility to solve this inference problem is to use genetics as a “common denominator” variable, e.g. in the context of Mendelian Randomization (MR) analyses. In MR, which is a type of instrumental variable analysis, genetic risk variants (e.g. from GWAS) are combined with genetic variants affecting the exposure of interest (here: DNAm), so-called methylation quantitative trait loci (meQTLs), allowing direct inferences to be drawn about a causal relationship between the two. If interpreted carefully 19 , 20 , MR can effectively shed new light on the “causality uncertainty” in EWAS.
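In its simplest single-instrument form, the MR logic outlined above reduces to the Wald ratio: the causal effect of the exposure (here, DNAm at a CpG) on the outcome (disease risk) is estimated as the SNP-outcome effect divided by the SNP-exposure (meQTL) effect, with a first-order delta-method standard error. The effect sizes below are purely illustrative:

```python
def wald_ratio(beta_outcome, se_outcome, beta_exposure):
    """Single-instrument MR (Wald ratio) estimate of the causal effect of
    the exposure on the outcome, with a first-order delta-method SE."""
    estimate = beta_outcome / beta_exposure
    se = se_outcome / abs(beta_exposure)
    return estimate, se

# Illustrative only: a SNP that raises DNAm by 0.2 units (meQTL GWAS)
# and disease log-odds by 0.05 (risk GWAS, SE 0.01)
est, se = wald_ratio(beta_outcome=0.05, se_outcome=0.01, beta_exposure=0.2)
print(round(est, 3), round(se, 3))  # 0.25 0.05
```

Multi-instrument MR methods (e.g. inverse-variance weighting) combine such ratios across independent meQTLs; this sketch shows only the basic single-SNP estimator.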
A prerequisite for meQTL-based MR analyses is the availability of meQTL GWAS data in the tissues of interest. For instance, meQTL GWAS have been performed in blood 21 , 22 , brain 23 – 25 , buccal 26 , and saliva samples 27 , 28 , although these initial reports used comparatively low-resolution DNAm microarrays (with n<500K CpG markers). One of the currently largest meQTL GWAS in terms of sample size was recently published for ~7,000 blood samples from two different ethnic descent groups, European and Asian 22 . DNAm profiling in that study was based on the Illumina Infinium HumanMethylation450 BeadChip (450K), capturing ~450K CpG sites. The study identified approximately 11.2 million genome-wide significant SNP-CpG pairs, of which ~50% showed high cross-tissue correspondence. Another noteworthy meQTL study was recently published by the Genotype-Tissue Expression (GTEx; www.gtexportal.org ) project, which examined nine tissues from ~400 donors in parallel (breast, kidney, colon, lung, muscle, ovary, prostate, testis and whole blood) 29 . While considerably smaller in sample size than ref. 22 , the GTEx team used the Illumina successor array (Infinium MethylationEPIC [EPIC], containing nearly 850K CpGs) 29 . Neither study performed systematic MR analyses to quantify the impact of DNAm on disease risk.
To close these gaps, we have created extensive genome-wide meQTL maps for three peripheral tissues (blood, buccals, and saliva) using the currently highest-resolution commercial DNAm microarray (EPIC) in sample sizes ranging from n=837 (saliva) to n=1,527 (buccals). Each of these meQTL GWAS identified between 11 and 15 million genome-wide significant (p < 10 −14 ) SNP-CpG pairs, a large fraction showing high cross-tissue correspondence. Next, we combined these novel meQTL GWAS results with recent risk GWAS for AD 1 (n= 1,474,097) and PD 2 (n= 788,989) using various different MR and colocalization analysis paradigms to assess whether and which of the hitherto reported GWAS signals might unfold their effects by affecting DNAm. Our novel results strongly suggest that the GWAS-based risk associations between up to five known AD/PD GWAS loci may at least partially be due to differential methylation. The complete and novel meQTL GWAS results, which provide the backbone of our study, are made freely available (URL: https://doi.org/10.5281/zenodo.10410506 ) for use in like-minded analyses on different phenotypes. | Methods
Methylation quantitative trait locus (meQTL) genome-wide association study (GWAS) and independent replication analyses.
Human samples
In this study, we analyzed a total of 2,592 samples from three independent datasets (“Berlin Aging Study II” [BASE-II] recruited in Berlin, Germany, “Barcelona Brain Health Initiative” [BBHI] recruited in Barcelona, Spain, and “Lifespan Changes in Brain and Cognition” [LCBC] recruited in Oslo, Norway) collected under the auspices of the EU-funded Lifebrain study 34 . Lifebrain participants for this study were selected based on the parallel availability of genome-wide SNP genotype and genome-wide DNA methylation data. Supplementary Table S19 provides a summary of demographic characteristics of the datasets used in this study. The use of DNA samples for genomics and epigenomics analyses in Lifebrain was approved by the ethics committee of University of Lübeck (approval number: 19–391A).
Berlin Aging Study II (BASE-II):
The portion of the BASE-II dataset used in this study consists of adult residents (age range: 23–88 years) from the greater metropolitan area of Berlin, Germany 35 , 36 . For this study, we included DNA samples from blood (n=1,058) and buccal swabs (n=837) collected at the second examination conducted between 2019–2020 as part of the “GendAge” project 36 . Of these, a total of n=830 BASE-II participants contributed DNAm data from both blood and buccal swabs, and this overlap was taken into account in analyses assessing the correspondence of meQTL effects in these two tissues ( Supplementary Table S19 ). The BASE-II/GendAge studies were conducted in accordance with the Declaration of Helsinki and approved by the ethics committee of the Charité—Universitätsmedizin Berlin (approval numbers: EA2/144/16, EA2/029/09).
Barcelona Brain Health Initiative (BBHI):
The Barcelona Brain Health Initiative (BBHI) is an ongoing, longitudinal study recruiting participants from the greater metropolitan area of Barcelona, Spain, with the focus on evaluating factors determining brain health 37 . For this study, buccal samples from n=372 BBHI participants (age range: 30–67 years) were available and were subjected to genome-wide SNP and DNAm profiling ( Supplementary Table S19 ). Collection of BBHI samples was conducted in accordance with the Declaration of Helsinki and following the recommendations of the “Unió Catalana d’Hospitals” with written informed consent from all subjects. The protocol was approved by the Unió Catalana d’Hospitals (approval number: CEIC 17/06).
Lifespan Changes in Brain and Cognition (LCBC):
This dataset comprises a collection of n=1,155 participants (age range: 20–81 years) recruited mostly in the greater metropolitan area of Oslo, Norway, by investigators at LCBC, as well as through other collaborations within Norway. The sample comprises phenotypically well-screened individuals with comprehensive neuropsychology, MRI, lifestyle, health, biomarkers, and other measures. For this study, DNA extracts used for genome-wide SNP genotyping and DNAm profiling originated from n=318 buccal swabs and n=837 saliva samples, which were non-overlapping ( Supplementary Table S19 ). The studies were approved by the Regional Ethical Committee of South East Norway. Written informed consent was obtained from all participants.
Genome-wide SNP genotyping, quality control, and imputation
DNA for all samples was extracted using standard procedures as described previously (BASE-II & BBHI: ref. 14 ; LCBC: ref. 38 ). Genome-wide SNP genotyping was performed using the Global Screening Array (GSA; Illumina, Inc., USA) at the Institute of Clinical Molecular Biology at UKSH Campus Kiel on an iScan instrument according to the manufacturer’s recommendations. Genotype calling, quality control (QC), and imputation steps were performed using an automated bioinformatics workflow described previously 39 , 40 . Briefly, genotype calling from the raw intensity data was performed using GenomeStudio (Illumina, Inc., version 2.0.2) and QC with the PLINK program (version 1.9) 41 , 42 . Imputation of untyped variants was performed with the Minimac3 43 software using the “Haplotype Reference Consortium” (HRC; v1.1 [EGAD00001002729, including 39,131,578 SNPs from ~11K individuals]) reference panel 44 . Finally, allele dosages (i.e. genotype probabilities) of ~39 million SNPs per proband were available for post-imputation QC. This entailed filtering at both the SNP and individual levels using the following criteria. SNP filtering : SNPs were excluded with a low imputation quality score (r 2 < 0.7), minor allele frequency (MAF) below 5%, genotyping rate below 98%, or significant deviations from Hardy-Weinberg Equilibrium (HWE) in control individuals (p < 5 × 10 −6 ). DNA sample filtering : individual-level genotyping data were excluded in case of low genotyping efficiency (< 98%), discrepancies between genetic and clinically recorded sex, duplicated DNA samples, cryptic relatedness (--king-cutoff 0.025), or implausible heterozygosity (outside mean ± 6 × SD). To correct for population stratification, genetic ancestry was mapped onto genotype data from the 1000 Genomes project (using that study’s five “superpopulation” codes), followed by principal component analysis (PCA) performed in PLINK (v2.0).
Only samples clustering with the “CEU” reference population were retained for analyses. Genomic locations of SNPs throughout this manuscript are based on human genome build GRCh37/hg19.
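The SNP-level post-imputation filters described above can be sketched in a few lines. This is a hedged Python illustration of the filter logic only (the actual QC was performed with PLINK, and the helper names `hwe_chisq_p` and `keep_snp` are hypothetical):

```python
import math

def hwe_chisq_p(n_aa, n_ab, n_bb):
    """One-degree-of-freedom chi-square test for Hardy-Weinberg equilibrium,
    given observed genotype counts (AA, AB, BB)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip([n_aa, n_ab, n_bb], expected) if e > 0)
    # Survival function of a 1-df chi-square: P(X > chi2) = erfc(sqrt(chi2/2))
    return math.erfc(math.sqrt(chi2 / 2))

def keep_snp(info_r2, maf, call_rate, hwe_p):
    """Apply the SNP-level inclusion thresholds stated in the text:
    imputation r2 >= 0.7, MAF >= 5%, call rate >= 98%, HWE p >= 5e-6."""
    return (info_r2 >= 0.7 and maf >= 0.05
            and call_rate >= 0.98 and hwe_p >= 5e-6)
```

A SNP in perfect HWE (e.g. counts 25/50/25) yields a chi-square of zero and a p-value of 1, so it passes the HWE criterion.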
DNA methylation measures and quality control in blood, buccal and saliva tissues
Genome-wide DNA methylation (DNAm) profiles were generated for the same individuals whose samples were also used in the genotyping experiments. DNAm was measured using the Infinium MethylationEPIC array (Illumina, Inc., USA) at the Institute of Clinical Molecular Biology at UKSH Campus Kiel on an iScan instrument according to the manufacturer’s recommendations. QC and data processing were performed using the same procedures as described previously 12 , 14 . Briefly, data pre-processing was performed in R (version 3.6.1) using the package bigmelon (version 1.22.0) with default settings 45 . Cell-type composition estimates were obtained with the R package EpiDISH (version 2.12.0) 46 and used for correction of DNAm β-values. Samples were excluded from the analysis if (a) the bisulfite conversion efficiency was below 80% according to the bscon function in the bigmelon package, (b) the sample had a beadcount < 3 in more than 5% of all probes, (c) the sample had a detection p-value above 0.05 in more than 1% of all probes, (d) the sample was identified as an outlier according to the outlyx function in the bigmelon package using a threshold of 0.15, (e) the sample showed a large change in β-values after normalization according to the qual function in the bigmelon package with a threshold of 0.1, (f) the sample showed a discrepancy between the sex predicted by the Horvath multi-tissue epigenetic age predictor and the reported sex, or (g) there was a greater than 70% discrepancy between the genotypes of 42 SNPs determined concurrently on the EPIC array and the GSA genotyping array. All samples were normalized with the dasen function of bigmelon. Further details on QC and data processing can be found in refs. 12 and 14 from our group. Genomic locations of CpGs throughout this manuscript are based on human genome build GRCh37/hg19.
Identification of genetic factors influencing methylation (meQTL GWAS)
After QC of both genome-wide SNP genotypes and DNAm patterns, we performed meQTL genome-wide association analyses separately for blood, buccal mucosa, and saliva using the R package MatrixEQTL 47 . In detail, we applied linear regression models including sex, the first ten principal components of a PCA assessing genetic ancestry, and the first five principal components of DNAm levels at pruned CpGs as covariates (see refs. 12 , 48 and supplementary methods). For the buccal datasets, an additional dummy variable was introduced to adjust for laboratory batches. We retained only test statistics from SNP-CpG pairs with p-values below 0.05. To account for multiple testing, we applied the same study-wide threshold as Hawe et al. 22 (i.e. α = 1×10 −14 ), which was based on the ~4.3 trillion tests performed in that study’s meQTL GWAS in a single tissue (blood). While we performed approximately three times that number of tests owing to the analysis of two additional tissue types, we note that not all of these were independent owing to the correlation structures among SNPs (in Europeans, there are approximately 1M independent SNPs at MAF≥0.05) and among CpGs (the EPIC array contains approx. 530K independent CpGs 49 ). Assuming that meQTL effects are completely independent across the three tissues used in our study, this would amount to a total of 3×1M×530K=1.59×10 12 independent tests and an effective Bonferroni-corrected α-level of 0.05/1.59×10 12 = 3.1×10 −14 , which is slightly less conservative than the level actually applied (i.e. α = 1×10 −14 ). Following the analyses by Hawe et al. 22 , we separated our findings into cis meQTL (SNP-CpG distance within 1 Mb), lr- cis meQTL (>1 Mb apart but on the same chromosome), and trans meQTL (associations between SNPs and CpG sites on different chromosomes).
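The distance-based meQTL classification and the effective α-level arithmetic above can be expressed compactly. This is a minimal Python sketch using the numbers stated in the text (1M independent SNPs, 530K independent CpGs, three tissues); the function names are illustrative and not part of the actual MatrixEQTL workflow:

```python
def classify_pair(snp_chrom, snp_pos, cpg_chrom, cpg_pos, window=1_000_000):
    """Classify a SNP-CpG pair as cis (<=1 Mb on the same chromosome),
    long-range cis (>1 Mb, same chromosome), or trans (different chromosomes)."""
    if snp_chrom != cpg_chrom:
        return "trans"
    if abs(snp_pos - cpg_pos) <= window:
        return "cis"
    return "lr-cis"

def effective_alpha(n_tissues=3, n_indep_snps=1_000_000, n_indep_cpgs=530_000):
    """Bonferroni level assuming fully independent tests across tissues:
    0.05 / (3 x 1M x 530K) ~= 3.1e-14."""
    return 0.05 / (n_tissues * n_indep_snps * n_indep_cpgs)
```

The computed level (~3.1×10⁻¹⁴) is slightly less conservative than the study-wide α of 1×10⁻¹⁴ actually applied.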
Independent replication analyses
To assess the replicability of meQTL results in buccal tissue, we were able to use two independent datasets in this study: a discovery dataset comprising samples from BASE-II and a replication dataset comprising samples from BBHI and LCBC. For the meQTL GWAS, BBHI and LCBC were analyzed jointly, adjusting for center using a dummy variable. We considered a meQTL from the discovery dataset replicated if it showed evidence of association at p<0.05 and a consistent direction of effect in the replication dataset.
For replication of meQTL results in blood, we downloaded the results from Hawe et al. 22 , who performed a meQTL GWAS in blood identifying 11,165,559 study-wide significant meQTLs. To calculate the replication rate, we considered only the SNPs and CpGs that also remained in our analysis after QC. For SNP-CpG pairs with p-values above 0.05 (which were not retained in our primary meQTL output), the test statistics were recalculated and stored. We considered a meQTL replicated if it showed evidence for association at p<0.05 and an effect direction consistent with that reported in Hawe et al. 22 .
For saliva, the replication rate could not be estimated because no independent dataset was available.
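The replication criterion applied in both the buccal and blood comparisons (nominal significance with a consistent effect direction) amounts to a simple rule; a hedged Python sketch with hypothetical helper names:

```python
def is_replicated(p_repl, beta_disc, beta_repl, alpha=0.05):
    """A discovery meQTL counts as replicated if the replication p-value is
    nominally significant and the effect direction is consistent."""
    return p_repl < alpha and (beta_disc > 0) == (beta_repl > 0)

def replication_rate(pairs):
    """Fraction of discovery meQTLs replicated; `pairs` holds tuples of
    (replication p-value, discovery beta, replication beta)."""
    hits = sum(is_replicated(p, bd, br) for p, bd, br in pairs)
    return hits / len(pairs)
```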
Comparison of meQTL findings across three tissue types: blood, buccal, saliva
For this comparison, we used the meQTL results from Hawe et al. 22 to estimate the replication rate and the correlation of effect estimates between the different tissues. These previous results were compared to post-QC SNP-CpG pairs from each dataset analyzed here (i.e. buccal and saliva). For SNP-CpG pairs with p-values above 0.05, the test statistics were recalculated and stored. We considered a meQTL replicated if it showed evidence for association at p<0.05 and an effect direction consistent with that reported in Hawe et al. 22 . Furthermore, we calculated the correlation of meQTL effect estimates using Pearson’s method.
Identification of long-range cis and trans meQTL regions shared across tissues
We identified the shared regions by annotating the significant meQTL SNPs to genes using the ANNOVAR software 50 based on their physical position on the chromosomes (hg19/GRCh37) as provided by the UCSC genome browser ( http://hgdownload.cse.ucsc.edu/goldenPath/hg19/database/ensGene.txt.gz ). For comparison of tissue-specific regions at the SNP level, we selected the top 1% of SNPs across all detected long-range cis and trans associations and annotated each top SNP to the most common gene in a +/−10 Mb region. When two genes were equally frequent in one region, we chose the gene previously reported in Hawe et al. 22 , when present. We also looked at the most frequently annotated genes within all lr- cis and trans SNPs in each dataset and compared the top 20 genes across blood, buccal, and saliva tissues.
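The region-labeling heuristic (most frequent gene within ±10 Mb of the top SNP, with ties broken in favor of genes reported by Hawe et al.) might be sketched as follows; this is a simplified Python illustration, whereas the actual annotation used ANNOVAR:

```python
from collections import Counter

def annotate_region(top_snp_pos, gene_annotations, window=10_000_000, prefer=()):
    """Label a region by the most frequent gene within +/- `window` of the
    top SNP. `gene_annotations` holds (position, gene) pairs for the SNPs in
    the region; ties are broken in favor of genes listed in `prefer`
    (e.g. genes reported previously), then alphabetically (a hypothetical
    fallback for determinism)."""
    in_window = [g for pos, g in gene_annotations
                 if abs(pos - top_snp_pos) <= window]
    counts = Counter(in_window)
    best = max(counts.values())
    candidates = [g for g, c in counts.items() if c == best]
    for g in prefer:
        if g in candidates:
            return g
    return sorted(candidates)[0]
```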
Linking DNA methylation and Alzheimer’s and Parkinson’s disease using Mendelian randomization
Summary data-based Mendelian randomization (SMR) analysis
To run an initial test for association between AD/PD risk and DNAm levels across all three analyzed tissues (blood, buccal, and saliva) and to prioritize downstream two-sample MR analyses, we applied the summary data-based Mendelian randomization (SMR) approach 30 . Only SNP-CpG pairs attaining study-wide genome-wide significance (i.e. p-values of the top associated cis meQTL <10 −14 ) were considered in this analysis. The disease-specific data were retrieved from summary statistics of the two most recent GWAS meta-analyses on risk for AD 1 (n total = 487,511) and PD 2 (n total = 482,730). To account for multiple testing within this arm of our study, we considered the total number of unique genome-wide significant methylation CpGs in cis that were included in the analysis from blood (n=118,955), buccal (n=92,694), and saliva tissues (n=100,233), yielding a total number of n= 311,882 comparisons. Accordingly, the Bonferroni-adjusted α level was set to α = 0.05/311,882 = 1.6×10 −7 . Whenever more than three SNPs were in linkage disequilibrium (LD; r 2 > 0.1, 1,000 kb) with a cis -SNP, a heterogeneity test (HEIDI) was performed to distinguish functional association from linkage, as implemented in the SMR tool 30 .
For gene prioritization from SMR, we assigned the significant CpGs to genes based on the information in the Infinium MethylationEPIC manifest file (version 1.0 B5, Illumina, Inc., USA).
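At its core, SMR combines a Wald-ratio estimate of the DNAm effect on disease with an approximate chi-square test; a minimal Python sketch of that calculation, following the approximate T_SMR statistic of Zhu et al. 30 (the real analyses used the SMR tool itself):

```python
import math

def smr_stat(b_gwas, se_gwas, b_meqtl, se_meqtl):
    """Wald-ratio estimate of the DNAm->disease effect (b_xy = b_GWAS/b_meQTL)
    and the approximate SMR test of Zhu et al.:
    T_SMR = z1^2 * z2^2 / (z1^2 + z2^2), which is approximately chi-square
    with 1 degree of freedom."""
    b_xy = b_gwas / b_meqtl
    z1, z2 = b_gwas / se_gwas, b_meqtl / se_meqtl
    t_smr = (z1 ** 2 * z2 ** 2) / (z1 ** 2 + z2 ** 2)
    p = math.erfc(math.sqrt(t_smr / 2))  # 1-df chi-square p-value
    return b_xy, p
```

Because T_SMR is bounded by the weaker of the two z-statistics, a significant SMR signal requires strong evidence in both the meQTL and the disease GWAS.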
Systematic two-sample Mendelian randomization (MR) analyses
To test for potential causal relationships between DNAm and AD/PD, we examined SMR-prioritized regions, i.e. those showing an SMR p<1.6×10 −7 , at least one genome-wide significant meQTL SNP (p<1×10 −14 ), and no evidence for significant heterogeneity (HEIDI test p>0.05), by two-sample Mendelian randomization (MR) analyses. Two-sample MR was performed using the R package MendelianRandomization 31 running four analysis models: simple median 51 , weighted median 51 , inverse variance weighted (IVW) 52 , and Egger regression 53 . Each model makes different assumptions and uses different strategies to avoid false-positive causal inferences. As recommended by the authors, we only considered those MR results further which showed consistently significant signals across all four models.
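Of the four models, the IVW estimator is the simplest to illustrate: it is a weighted regression of the outcome (disease) effects on the exposure (DNAm) effects through the origin. A minimal Python sketch, for illustration only (the actual analyses used the MendelianRandomization R package):

```python
import math

def ivw_estimate(b_exp, b_out, se_out):
    """Inverse-variance-weighted causal estimate from per-SNP exposure
    effects (b_exp), outcome effects (b_out), and outcome standard errors
    (se_out): a 1/se^2-weighted regression through the origin."""
    num = sum(bx * by / s ** 2 for bx, by, s in zip(b_exp, b_out, se_out))
    den = sum(bx ** 2 / s ** 2 for bx, s in zip(b_exp, se_out))
    beta = num / den
    se = math.sqrt(1.0 / den)  # fixed-effect standard error
    return beta, se
```

If every SNP's Wald ratio (b_out/b_exp) agrees, the IVW estimate equals that common ratio; heterogeneous ratios are what the median- and Egger-based models are designed to guard against.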
In addition to testing regions prioritized by the SMR method, we also analyzed the 10 CpGs most frequently associated with SNPs in independent regions. Genes corresponding to these CpGs were annotated using the Infinium MethylationEPIC manifest file (version 1.0 B5, Illumina, Inc., USA). CpGs in intergenic regions (i.e. not annotated to any gene) were assumed to represent one independent region each. Only independent (r 2 <0.1, 1,000 kb) SNPs present in at least one GWAS summary statistic (i.e. AD and/or PD) with a MAF >0.05 and showing p-values <1×10 −14 in the meQTL GWAS were included in these analyses. In these high-frequency regions, MR was performed if there were at least 3 independent SNPs after outlier correction with MR-PRESSO 32 . To estimate study-wide significance for this arm of our study, we used Bonferroni’s method considering the total combined number of tests performed in AD (n=193) and PD (n=146), i.e. α = 0.05/339 = 1.47×10 −4 .
Sensitivity analyses
We performed extensive sensitivity analyses using several methods. First, we tested for heterogeneity as implemented in the MendelianRandomization 31 package, taking a heterogeneity p-value greater than 0.05 as evidence against heterogeneity among the instrumental variables. Second, we performed a global test to identify outliers in the data as implemented in the MR-PRESSO tool 32 ; if the global test yielded a p-value greater than 0.05, we assumed that the data were consistent and contained no local outliers. Finally, the intercept parameter of the Egger regression represents the average pleiotropic effect of a genetic variant 31 ; if the intercept p-value remained greater than 0.05, we assumed that there was no directional pleiotropy.
In two-sample MR analyses, the selection of correlated instrumental variables (SNPs) within a gene can lead to numerically unstable estimates of the causal effect 33 . For this reason, we recalculated the MR estimates for significant CpGs, originally identified with (r 2 <0.1, 1,000 kb) SNPs, using increasingly stringent squared-correlation thresholds down to r 2 ≤0.01. We lowered the r 2 threshold until fewer than 3 SNPs remained for analysis or a value of r 2 = 0.01 was reached.
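The threshold-lowering procedure rests on LD-based pruning of the instruments; a greedy variant can be sketched as follows (illustrative only: SNPs are assumed to be pre-sorted by association strength, and the actual pruning was based on standard LD tools):

```python
def prune_by_ld(snps, r2, threshold=0.1):
    """Greedy LD pruning: walk SNPs in their given priority order and keep a
    SNP only if its squared correlation with every already-kept SNP is at
    most `threshold`. `r2[i][j]` is the squared correlation between
    snps[i] and snps[j]."""
    kept = []  # indices of retained SNPs
    for i in range(len(snps)):
        if all(r2[i][j] <= threshold for j in kept):
            kept.append(i)
    return [snps[i] for i in kept]
```

Lowering `threshold` from 0.1 towards 0.01 monotonically shrinks the instrument set, which is why several CpGs dropped below the recommended minimum of three SNPs in the sensitivity analyses.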
We used colocalization as part of the Mendelian randomization sensitivity analyses, testing the assumptions about instrumental variables (SNPs) for a given genetic region (the gene and the CpGs within it) using the R package susieR 54 . In these analyses, we included all SNPs (not filtered for LD) used for MR in cis regions. If there is strong evidence that exposure and outcome are influenced by different causal variants, then it is implausible that variants in that region are valid instrumental variables for the exposure 19 .
Methylation quantitative trait locus (meQTL) genome-wide association studies (GWAS) and independent replication analyses
For each of the three available tissues, blood (n=1,058), buccals (n=1,527), and saliva (n=837), we performed meQTL GWAS analyses testing approximately 5.5 million common (minor allele frequency [MAF] ≥0.05) SNPs for association with approximately 750,000 CpG probes after QC ( Figure 1 ). In total, this procedure resulted in over 12 trillion statistical tests, of which approximately 1.6 trillion were independent, resulting in a conservative study-wide α-level of 1×10 −14 ( Methods ). Using this threshold, we identified between 11 and 15 million genome-wide significant SNP-CpG pairs in each dataset ( Figure 2 ). In blood, approximately 92% of meQTL were detected in cis (i.e. SNP-CpG distance within ±1 Mb on the same chromosome), whereas 4% were in long-range cis (lr- cis , i.e. SNP-CpG distance >1 Mb but located on the same chromosome) and 4% were in trans (i.e. SNP and CpG sites located on different chromosomes). Comparable numbers of meQTLs were identified in the GWAS analyses of buccal and saliva samples ( Table 1 ; Supplementary Tables S1 – S3 ). To the best of our knowledge, our study comprises the largest meQTL GWAS available for these latter two tissues to date.
For buccal swab specimens, we had two independent datasets ( Figure 1 , Supplementary Figure S2 ) available, allowing us to assess the degree of replication for meQTL associations within that particular tissue. These analyses revealed a very high (~94%) degree of replication of SNP-CpG pairs showing genome-wide significance in BASE-II samples (n=837) when assessed in the combined BBHI-LCBC-buccal (n=690) dataset. For this purpose, “replication” was assumed for SNP-CpG pairs showing the same direction of effect with at least nominal significance (i.e. p<0.05; Methods ), as suggested previously 22 . To assess replication in blood, we compared our meQTL results to the findings recently reported by Hawe et al. 22 . Of the 11,165,559 genome-wide significant “cosmopolitan” meQTLs showing ancestry-specific replication in samples from both Europe and Asia 22 , 7,612,751 were also tested in BASE-II blood samples; of these, 7,405,579 (~97%) showed consistent effect directions with at least nominal significance (p<0.05). Furthermore, we found a highly significant correlation of effect size estimates between significant (p<10 −14 ) cosmopolitan meQTL results from Hawe et al. 22 and our analyses (r= 0.96, p<2.2×10 −16 ). While no independent dataset of sufficient size was available to assess replication of meQTL effects in saliva, overall, these findings demonstrate that our meQTL results are highly robust and, for blood, in good agreement with the literature.
Comparison of meQTL findings across three tissues: blood, buccals, saliva
Next, we addressed the question of how stable meQTL effects are across tissue types by estimating cross-tissue correspondence rates (using the same criteria defining replication outlined above). Since blood and buccal tissue data were partially obtained from the same individuals (i.e. participants of the BASE-II study; Figure 1 ), we used the independent blood meQTL results recently reported by Hawe et al. 22 to compare replication and correlation of effect direction with the other tissues ( Figure 3B – D ). Overall, we observed high degrees of cross-tissue correspondence for cis SNP-CpG pairs, ranging from 71 to 94% across all three tissue types. The highest correspondence (94/96/97%) was seen when comparing cis /lr- cis / trans SNP-CpG pairs in buccal vs. saliva specimens ( Figure 3D ), while cis /lr- cis / trans SNP-CpG pairs in blood vs. buccals ( Figure 3B ) showed the lowest correspondence rates (71/74/74%). In contrast, the strongest correlation of effect sizes (r=0.92/0.92/0.90) was observed when comparing cis /lr- cis / trans SNP-CpG pairs in blood vs. saliva specimens ( Figure 3C ), while cis /lr- cis / trans SNP-CpG pairs in blood vs. buccals ( Figure 3B ) showed the weakest correlations (r=0.71/0.80/0.82).
Next, we calculated how many of the genome-wide significant SNP-CpG pairs correspond across all three, or at least two of three, tissue types, and what proportion of SNP-CpG pairs is present in only one tissue (see Supplementary Figure S3 ). We identified a much larger proportion of cis meQTLs that are not tissue-specific (67%) than of those that are tissue-specific (6%). This observation is in line with published cross-tissue meQTL results 29 , although those did not investigate buccal or saliva samples. Overall, 67/78/86% of the cis /lr- cis / trans SNP-CpG pairs, respectively, showed significant signals in all three tissues. This suggests that meQTL associations become less tissue-specific with increasing distance between CpG site and SNP.
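The cross-tissue sharing breakdown reported above (fractions of pairs significant in one, two, or all three tissues) amounts to a simple set computation; a sketch assuming sets of significant SNP-CpG identifiers per tissue (the helper name is hypothetical):

```python
def sharing_breakdown(blood, buccal, saliva):
    """Proportion of significant SNP-CpG pairs found in exactly one, two, or
    all three tissues. Inputs are sets of hashable (SNP, CpG) identifiers."""
    universe = blood | buccal | saliva
    counts = {1: 0, 2: 0, 3: 0}
    for pair in universe:
        counts[sum(pair in tissue for tissue in (blood, buccal, saliva))] += 1
    n = len(universe)
    return {k: v / n for k, v in counts.items()}
```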
Identification of long-range cis and trans meQTL regions shared across tissues
Previous work suggested that blood meQTL SNPs acting in trans often regulate a large number (several hundred to thousands) of CpGs located in the same functional unit (i.e. gene) of the genome, arguing for shared molecular effects 22 . With the data from our study, we were able to independently assess these findings in blood and extend them to buccal and saliva specimens. To this end, we first annotated trans meQTL SNPs to genes ( Methods ), which resulted in 9,302 genes in blood (vs. 6,816 in buccals and 7,154 in saliva). We then examined whether the 20 most frequently annotated genes were also present among the 162 (~top 2% of 9,302 genes) most frequently annotated genes in the other tissue types (see Table 2 and Supplementary Tables S4 and S5 ). In general, all “top-20” genes from one tissue occur among the top ~2% (n=162) genes of the other tissues, except NFKB1 , which had no significant trans meQTL in buccal tissue, and RP11–876N24.2 , which did not show up in blood. Conversely, trans meQTL SNPs were most frequently annotated to MAD1L1 in all three tissues.
Analogously, we examined genes acting in long-range cis regions. For these SNPs, we identified 3,610 genes in blood (vs. 3,189 in buccal and 3,162 in saliva). Similar to the trans regions, all top 20 genes rank among the top signals in all three tissue types, i.e. all are observed in the top 2% (often even among the top 50) genes of the other tissues ( Supplementary Tables S6 – S8 ). The meQTL SNPs in the lr- cis regions were most frequently annotated to MSRA and RP11–574M7.2 in all three tissue types.
Additionally, and following a similar visualization as in Hawe et al. 22 , we annotated the top 1% of all significant trans SNPs to the most frequent meQTL genes within a +/−10 Mb region to identify top independent regions ( Figure 4A – C ). This annotation revealed blood to have the highest number of independent regions (n=22), followed by buccal tissue (n=21) and saliva (n=16). While many meQTL act in a cell-type-dependent manner, i.e. a substantial proportion of the annotated top 1% independent regions in blood is not included among the top 5% independent regions in buccals and saliva (10/22=45% and 2/22=9%, respectively), the results of the blood meQTL analyses are highly similar to those from Hawe et al. 22 . In that study, 12 top independent regions were annotated, of which only one ( LINC00273 ) is not included in the top 5% independent regions from our blood meQTL GWAS. A similarly high correspondence rate was observed between the two independent buccal datasets, where all top 1% regions in BASE-II are included in the top 5% of independent regions from the BBHI-LCBC-buccal dataset ( Supplementary Figure S2 ).
Causal relationships between DNA methylation and neurodegenerative diseases using summary data-based Mendelian randomization (SMR) analysis
In the next stage of our project, we addressed the question of whether there is a causal link between DNAm and AD/PD risk using cis meQTL SNPs as instrumental variables in MR analyses. Disease-specific risk effects and association evidence were extracted from two recent GWAS on the respective disorders, i.e. the study by Bellenguez et al. (n=1,474,097) 1 for AD and by Nalls et al. (n=788,989) 2 for PD. We used the SMR tool to prioritize significant SNP-CpG signals identified in our meQTL GWAS analyses for follow-up using two-sample MR analyses (next section).
Overall, 118,757 SNPs overlapped between the AD GWAS and blood meQTL GWAS summary statistics and could be used in SMR (total n=311,882 unique genome-wide significant meQTL CpGs in cis across all tissue types). In blood, the SMR results suggest a potential, study-wide (using a Bonferroni-corrected α of 0.05/311,882=1.6×10 −7 ) significant causal relationship between DNAm and AD at 220 SNP-CpG pairs ( Supplementary Table S9 ). Of these, 64 (of 220) pairs show evidence for a single shared underlying causal variant affecting both DNAm in blood and AD risk (i.e. p>0.05 in the HEIDI test). These 64 SNP-CpG pairs map to 42 independent loci (25 map to genes, while 17 are located in intergenic regions; Supplementary Table S9 ). Equivalent analyses of the meQTL results in buccal (saliva) tissue prioritized 156 (176) study-wide significant SNP-CpG pairs, of which 33 (12) show evidence for a single shared underlying causal variant across 24 (11) loci ( Supplementary Tables S10 – S11 ; Supplementary Figure S4 ).
For PD, a total of 118,945 SNPs overlapped between the meQTL GWAS in blood and the PD risk GWAS and were used in the SMR analyses. In blood, we identified 114 SNP-CpG pairs with a potential causal relationship with PD (p<1.6×10 −7 ; Supplementary Tables S12 – S14 ), of which 13 showed evidence for a single shared underlying causal variant affecting both DNAm and PD risk (i.e. p>0.05 in the HEIDI test). These 13 SNP-CpG pairs map to 13 loci (6 map to genes, while 7 are located in intergenic regions; Supplementary Table S12 ). Equivalent SMR analyses of PD and the meQTL results in buccals (saliva) prioritized 101 (101) study-wide significant SNP-CpG pairs, of which 10 (7) show evidence for a single shared underlying causal variant across 10 (7) loci ( Supplementary Tables S12 – S14 ; Supplementary Figure S5 ).
Follow-up of SMR results by two-sample Mendelian randomization (MR) analyses
Genes identified as significant by SMR are not necessarily causally related to the phenotype in question; they merely stand a higher chance of being in such a relationship (hence our use of SMR as a “prioritization” approach) 30 . Potential causal relationships at the genes/loci prioritized by SMR were assessed by two-sample MR using the MendelianRandomization tool 31 . In addition, we tested the top 10 CpGs emerging from the meQTL GWAS analyses for cis , lr- cis and trans loci in each of the three tissues (i.e. an additional 10×3×3=90 CpGs), as these capture particularly strong genetic effects on DNAm at these sites that may be missed by the standard SMR paradigm. A summary of our MR workflow and numbers can be found in Supplementary Figure S6 .
For AD, we identified 42 independent regions in blood (24 in buccal and 11 in saliva; p<1.6×10 −7 & pHEIDI>0.05) using SMR analysis. For the MR analysis, we broadened the CpG selection to include all CpGs mapped to each prioritized region for which there was at least one significant meQTL SNP (p<10 −14 ). This selection procedure resulted in 297 prioritized CpGs in blood tissue (214 in buccal, 60 in saliva). For each of these CpGs, we performed two-sample MR analyses if at least 3 independent meQTL SNPs (r 2 <0.1, 1,000 kb and p<10 −14 ) were available that overlapped with SNPs from the respective GWAS summary statistics and were not identified as outliers by the MR-PRESSO tool 32 . Overall, this procedure resulted in sufficient data for a total of 193 MR analyses in AD ( Supplementary Figure S6 ). In total, we identified nine (four in blood and five in buccal tissue) putative causal CpGs with p-values falling below the multiple-testing corrected threshold for this part of our study (p<1.47×10 −4 ; Supplementary Figure S6 ) in all four MR models tested ( Supplementary Figure S7 ). One example is KANSL1 , which showed highly significant evidence for a causal relationship (i.e. a positive sign of the effect size estimate) with AD risk in blood (cg09860564, smallest p = 4.08×10 −13 ) and buccal tissue (cg17642057, smallest p = 1.21×10 −12 ). KANSL1 is functionally interesting as it maps to an inverted haplotype region on chr. 17q21 in the immediate vicinity of the gene encoding the microtubule-associated protein tau ( MAPT ), whose accumulation as neurofibrillary tangles represents a neuropathological hallmark of AD 1 . The other significant two-sample MR signals were elicited by individual CpG sites in PSMC3 (blood; smallest p = 1.2×10 −12 ) and PRDM7 (buccal; smallest p = 9.6×10 −13 ), as well as two CpGs in TSPAN14 (buccal; smallest p = 2.72×10 −21 ).
In addition, there were three (of which two were observed in blood) CpGs from intergenic regions showing significant effects ( Supplementary Figure S7 ).
In PD, the equivalent number of CpGs assessed by two-sample MR analyses was 146 ( Supplementary Figure S6 ). From these, we identified 15 putative causal CpGs showing study-wide significant evidence for potential causal effects of DNAm on PD risk across all three analyzed tissues ( Supplementary Figure S8 ). Interestingly, all PD CpGs were located in the inverted haplotype region on chr. 17q21 in and near MAPT (within a ~500kb window encompassing CRHR1 , MAPT , and KANSL1 ). Of note, none of the significant 17q21 CpGs in PD overlapped with those that emerged in this region for AD. No other PD loci outside the 17q21 region were highlighted by two-sample MR using our novel meQTL catalogs.
Sensitivity analyses on the two-sample Mendelian randomization results
MR results can be biased towards false-positive findings by residual correlation between SNPs used as “independent” instrumental variables 33 . The default correlation (i.e. linkage disequilibrium) threshold used in the primary analyses here was r 2 ≤0.1, which represents a commonly applied cut-off in this type of MR setting 19 . To assess the stability of our MR results and to minimize potential bias due to residual correlation among SNPs, we recalculated all significant two-sample MR results using more stringent correlation thresholds, down to r 2 ≤0.01 ( Methods ). As can be seen from Supplementary Tables S15 & S16 , the number of usable independent SNPs dropped below the recommended minimum of three for many CpGs and left only one CpG each (AD: cg20307385 in PSMC3 [ Figure 5 ; Supplementary Table S15 ]; PD: cg07936825 in MAPT [ Supplementary Table S16 ]) for the MR analyses at the most stringent threshold of r 2 ≤0.01. Both of these CpGs continued to show strong and consistent association by MR across all four models used. Intermediate r 2 thresholds between 0.1 and 0.01 enabled sensitivity MR analyses for a total of 13 of the 28 CpGs ( Supplementary Tables S15 & S16 ). In the majority of cases, the initial MR results were confirmed, although many showed less significant effects, likely owing to the lower number of instrumental variables (i.e. SNPs) available at these more stringent thresholds. Only the MR results for four CpGs (1 AD, 3 PD) yielded non-significant (p>0.05) results in at least one MR model using an r 2 threshold <0.1 ( Supplementary Tables S15 & S16 ). While this could indicate a possible bias in the primary MR analyses at these CpGs, we emphasize that non-significance only affected the “simple” model in each instance and the support remained highly significant for these four CpGs in the three remaining MR models.
Thus, by and large, these sensitivity analyses do not indicate the presence of a strong bias in our MR results, at least not for the CpGs where additional testing was possible.
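The instrument-pruning step behind these sensitivity analyses can be illustrated with a simple greedy routine. This is a toy sketch, not the authors' actual pipeline; the SNP identifiers and pairwise r² values below are hypothetical:

```python
def prune_instruments(snps, r2, threshold):
    """Greedily retain SNPs so that no retained pair exceeds the r2 threshold.

    snps: SNP ids ordered by association strength (strongest first).
    r2: dict mapping frozenset({snp_a, snp_b}) -> pairwise r2 (absent pairs ~ 0).
    """
    kept = []
    for snp in snps:
        if all(r2.get(frozenset((snp, k)), 0.0) <= threshold for k in kept):
            kept.append(snp)
    return kept

# Hypothetical instruments for one CpG, strongest association first.
snps = ["rs1", "rs2", "rs3", "rs4"]
r2 = {frozenset(("rs1", "rs2")): 0.40,
      frozenset(("rs1", "rs3")): 0.05,
      frozenset(("rs1", "rs4")): 0.03,
      frozenset(("rs3", "rs4")): 0.20}

loose = prune_instruments(snps, r2, threshold=0.1)    # primary-analysis cut-off
strict = prune_instruments(snps, r2, threshold=0.01)  # most stringent sensitivity setting
```

At the stricter threshold only a single instrument survives in this toy example, mirroring how the usable SNP count fell below the recommended minimum of three for many CpGs in the sensitivity runs.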
As an additional line of sensitivity analyses, we tested for colocalization of GWAS results, i.e. SNPs representing both meQTLs and risk variants. In general, evidence for colocalization (which indicates that the same variant is driving the meQTL and disease associations) is regarded as supportive of a significant MR finding 19 . In AD, support for colocalization was observed for four of the nine MR CpGs ( Supplementary Table S17 ). This relates to CpGs in PSMC3 (1 CpG) and TSPAN14 (2 CpGs) as well as an intergenic probe on chromosome 11q14 (cg04441687), which is located approx. 100kb downstream of PICALM . Thus, for these four CpGs, all statistical evidence accrued in the multipronged analyses performed in this study unequivocally points to a causal relationship between DNAm and risk for disease. Interestingly, all four of the implied regions also show evidence for differential methylation in AD vs. control brain samples in the recent brain EWAS meta-analysis by Smith et al. 11 ( Supplementary Table S17 and next section). In PD, where we identified a large number of consistent and highly significant two-sample MR signals for CpG sites in a ~400kb region on chromosome 17q21 (encompassing CRHR1-MAPT-KANSL1 ), none of the colocalization results favored the presence of a shared variant (H4), but in all instances pointed to the existence of two separate variants underlying DNAm and PD risk (H3; Supplementary Table S18 ). The most obvious scenario in which such “conflicting” results can occur is that the exposure and outcome have distinct causal variants that are in linkage disequilibrium, which in turn may signify that one of the MR assumptions may be violated 19 .
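Colocalization tools summarize each locus as posterior probabilities for five hypotheses (H0 through H4). The decision logic described above can be sketched as follows; the posterior values are invented for illustration, and the 0.8 cutoff is a common convention rather than a fixed rule:

```python
def interpret_coloc(posteriors, cutoff=0.8):
    """Label a locus from coloc-style posterior probabilities for H0..H4.

    H4: one shared causal variant drives both DNAm and disease risk.
    H3: two distinct causal variants (possibly in LD) underlie the two traits.
    """
    best = max(posteriors, key=posteriors.get)
    if best == "H4" and posteriors["H4"] >= cutoff:
        return "shared variant (supports the MR finding)"
    if best == "H3" and posteriors["H3"] >= cutoff:
        return "distinct variants in LD (possible MR assumption violation)"
    return "inconclusive"

# Invented posteriors resembling the two patterns described in the text:
# AD loci favouring H4, and the PD 17q21 signals favouring H3.
ad_like = {"H0": 0.01, "H1": 0.02, "H2": 0.02, "H3": 0.05, "H4": 0.90}
pd_like = {"H0": 0.01, "H1": 0.03, "H2": 0.02, "H3": 0.88, "H4": 0.06}
```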
Comparison of novel meQTL-based MR results and brain-based EWAS for AD and PD
As outlined in the introduction, the application of MR to imply causal relationships between exposure (here: DNAm) and outcome (here: AD/PD risk) can effectively circumvent the problem of “causality uncertainty” of EWAS. In this regard our study leverages the oftentimes substantially larger sample sizes typically used for disease risk GWAS (here ranging from 788,989 in PD 2 to 1,474,097 in AD 1 ) and meQTL GWAS (here exceeding 1,500 samples for blood). The sample sizes for the largest (to our knowledge) primary EWAS in brain samples for AD are n~960 11 and n~320 for PD 17 . Notwithstanding the (much) reduced power of these primary EWAS, evidence for differential DNAm at overlapping loci from these studies may still be regarded as supportive of the MR-based findings derived from our data. To this end, it is comforting to note that four of the eight top MR-based loci identified here are also supported by EWAS, at least in AD ( Supplementary Tables S17 [AD] and S18 [PD]).
First, TSPAN14 was reported as one of the top EWAS findings in the recent meta-analyses performed by Smith et al. 11 . In that study, the authors found evidence for differential DNAm at cg16988611 (p=1.9×10⁻¹⁰ in prefrontal cortex, and p=9.98×10⁻¹² in the cross-cortex analyses). While the two lead CpGs in this locus in our study (i.e. cg24699150 and cg22345419) were not analyzed by Smith et al. 11 since neither probe is included on the 450K array, it is reasonable to assume that these two signals represent the same underlying effect. Second, while for PSMC3 , Smith et al. 11 reported no significant results for our primary CpG (cg20307385), they did find genome-wide significant evidence for association with cg06784824 (p=3.0×10⁻⁸), which is located ~75kb proximal in the last exon of SPI1 . In the most recent AD risk GWAS 1 , this general locus is annotated to extend from SPI1 to CELF1 , and PSMC3 is one of five genes mapping into this interval (see Supplementary Fig. 16 in ref. 1 ). This could indicate that there are two independent DNAm sites acting in this region (one near the 3’ end of SPI1 and one within PSMC3 ) or that they are pointing to the same signal that is also highlighted by the AD GWAS. Third, our MR signal near PRDM7 (elicited by cg16611967) is directly confirmed by Smith et al. 11 , who report at least nominal (p<0.05) evidence for differential methylation with this probe in their cross-cortex EWAS analyses. The much stronger statistical support here (p-values ranging from 3×10⁻³ to 6×10⁻¹³; Supplementary Table S15 ) is likely afforded by the much larger sample sizes, and hence power, of our analyses. Fourth, our MR signal on chromosome 11q14 elicited by cg04441687 ~200kb upstream of PICALM cannot be directly compared with Smith et al. 11 since this CpG site is missing from the 450K array. However, they report cg07180834, which maps approx. 30kb p-ter from our signal, to be differentially methylated in prefrontal cortex (p=0.001), implying the same general region surrounding our finding. Lastly, Smith et al. 11 report genome-wide significant evidence for differential methylation with at least two CpGs in the general MAPT region on chromosome 17q21, i.e. cg20864568 (p=9.93×10⁻⁸ in prefrontal cortex) and cg15194531 (p=1.74×10⁻⁸ in the cross-cortex analyses), suggesting that the link between this locus and AD risk may, indeed, be mediated by differences in DNAm. While our MR results for this locus clearly imply causality underlying this link, we note that the missing evidence for colocalization (see previous section) may indicate some bias in the MR analyses.
While for PD essentially all of our MR results could be directly compared to the EWAS by Pihlström et al. 17 who also used the EPIC array, none of our PD-related results within the KANSL1/MAPT region on chromosome 17q21 showed evidence for differential DNAm in that study. This may, at least partially, be due to the exceedingly small sample size (n~320) used in that EWAS.
In summary, there is either direct or indirect support for five of our seven MR-based DNAm loci from a recent AD EWAS using samples from different human brain regions. This can be seen as evidence for independent validation on two levels: i) validation of the overall approach taken here, i.e. combining peripheral (non-brain) meQTL data with AD genetics to derive mechanistically relevant links acting in the brain, and ii) support for the involvement of DNAm underlying the well-established AD risk associations at these five loci. We note that the non-confirmation by EWAS of the remaining two intergenic CpGs (cg04043334 at 10q23 and cg02521229 at 11q12) does not necessarily imply the absence of a causal relationship, owing to the much lower genomic resolution of the primary EWAS 11 . We also note that the former of these two intergenic CpGs (cg04043334) maps ~143kb from TSPAN14 , which is one of the major brain EWAS signals in the study by Smith et al. 11 .

Discussion
In this work, we performed extensive genome-wide mapping of meQTLs in three peripheral human tissues, of which two (buccal and saliva) were not sufficiently covered in comparable previous efforts. After performing more than 12 trillion statistical tests, we identified between 11 and 15 million genome-wide significant SNP-CpG pairs in each tissue. Most of these (~90%) were located in cis , while long-range cis and trans effects each comprised approx. 5% of the remaining signals. In a second step, we combined these novel meQTL GWAS results with large risk GWAS for AD and PD using a multipronged MR and colocalization analysis approach to assess whether any of the hitherto reported AD/PD GWAS signals might unfold their effects by affecting DNAm. These analyses strongly suggest that the GWAS-based risk associations between PSMC3 , PICALM , and TSPAN14 and AD may (at least partially) be due to differential DNAm at or in the vicinity of these genes. In addition, there is strong – albeit less unequivocal – support for causal links between differential DNAm at PRDM7 in AD as well as at KANSL1/MAPT in AD and PD. To facilitate like-minded analyses in other complex human traits, we have made the complete and entirely novel meQTL GWAS results freely available to the scientific community (URL: https://doi.org/10.5281/zenodo.10410506 ).
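The headline figure of more than 12 trillion tests is consistent with an exhaustive SNP-by-CpG scan in each tissue; a quick plausibility check using the approximate marker counts given in the abstract (~5.5M SNPs, ~850K CpGs, three tissues):

```python
n_snps = 5.5e6    # ~5.5M SNP sites (approximate, from the abstract)
n_cpgs = 8.5e5    # ~850K CpG sites on the DNAm array (approximate)
n_tissues = 3     # whole blood, buccal, saliva

total_tests = n_snps * n_cpgs * n_tissues  # ~1.4e13 pairwise tests
```

This back-of-envelope product (roughly 14 trillion) agrees with the "more than 12 trillion" stated above.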
Our study has several strengths, which include: i) utilizing comparatively large sample sizes across the three different tissue types (resulting in the largest meQTL GWAS ever performed in buccal and saliva specimens); ii) using the highest resolution DNAm microarray currently commercially available; iii) employing stringent statistical thresholds to declare genome-wide significance; iv) assessing and establishing independent replication for top meQTLs in buccal and blood tissue; and v) applying a multipronged and state-of-the-art analysis approach to infer potential causality between DNAm and disease associations for two common neurodegenerative disorders each based on the largest risk GWAS published in the respective fields.
Despite these strengths, there are a number of caveats and potential limitations inherent in our study. First, all identified associations, including meQTL results and potentially causal disease links, are of a statistical nature and do not necessarily imply true molecular relationships. While we went to great lengths to limit false-positive or biased results throughout the various analysis arms of our work, none of the reported associations should be regarded as established until further validation from functional experiments. We note, however, that at least for buccal and blood tissue we observe very high (>>90%) replication rates for our top meQTL findings, suggesting that the genetic effects on DNAm in these tissues are relatively stable and likely genuine. Second, by design, the meQTL maps provided are limited by the resolution of the utilized DNAm microarray. While this covers ~850K CpGs in functionally relevant regions of the human genome, this number still only represents ~1/30th of the CpG sites that can be measured by whole-genome DNAm sequencing approaches. While this difference in resolution is substantial, the use of high-throughput microarrays is much more cost efficient, allowing us to assay much larger numbers of samples and, hence, to achieve greater statistical power. Third, performing and interpreting MR analyses for causal inferences has many drawbacks and potential limitations (extensively discussed in refs. 19 , 20 ). Again, we approached this topic with great caution by performing a large number of alternative and supporting analyses to derive the best possible inferences in the context of our study. However, we cannot exclude the possibility that some (or even all) of our MR-based conclusions may be biased or false. Only dedicated molecular experiments directly testing the hypotheses put forward in our study can help to distinguish true from false causal links.
Finally, both the newly derived primary meQTL maps as well as the AD/PD risk GWAS are based on individuals of European descent. Therefore, no inferences can be made with respect to potential causal links highlighted here in individuals of different ethnicities.
In summary, our study represents a tour de force analysis resulting in the largest and most comprehensive catalogue of meQTL effects in human buccal and saliva tissues. Using these and additional blood-based meQTL data implicates a likely causal role of differential DNAm in AD and PD development in some genomic regions associated with disease risk by GWAS. Future work needs to independently replicate our results and elucidate the molecular mechanisms underlying these associations.

Author contributions

Conception and design of study: O.O., Y.S., C.M.L., L.B. Sample recruitment and handling: D.B.F., G.C., S.D., A.M.F., U.L., A.P.L., C.S.P., J.M.T., V.M.V., K.B.W., I.D. Generation of molecular data: Y.S., V.D., S.S.S., T.W., M.W., A.F. Statistical analysis and interpretation: O.O., Y.S., J.H., L.D., M.S., C.M.L., L.B. First draft of manuscript: O.O., L.B. Critical review and approval of final manuscript: All authors.
Abstract

DNA methylation (DNAm) is an epigenetic mark with essential roles in disease development and predisposition. Here, we created genome-wide maps of methylation quantitative trait loci (meQTL) in three peripheral tissues and used Mendelian randomization (MR) analyses to assess the potential causal relationships between DNAm and risk for two common neurodegenerative disorders, i.e. Alzheimer’s disease (AD) and Parkinson’s disease (PD). Genome-wide single nucleotide polymorphism (SNP; ~5.5M sites) and DNAm (~850K CpG sites) data were generated from whole blood (n=1,058), buccal (n=1,527) and saliva (n=837) specimens. We identified between 11 and 15 million genome-wide significant (p<10⁻¹⁴) SNP-CpG associations in each tissue. Combining these meQTL GWAS results with recent AD/PD GWAS summary statistics by MR strongly suggests that the previously described associations between PSMC3 , PICALM , and TSPAN14 and AD may be founded on differential DNAm in or near these genes. In addition, there is strong, albeit less unequivocal, support for causal links between DNAm at PRDM7 in AD as well as at KANSL1/MAPT in AD and PD. Our study adds valuable insights on AD/PD pathogenesis by combining two high-resolution “omics” domains, and the meQTL data shared along with this publication will allow like-minded analyses in other diseases.

Data availability
All results of this study have been made available on Zenodo (URL: https://doi.org/10.5281/zenodo.10410506 ). Together with the AD/PD summary statistics from Bellenguez et al. (ref. 1 ) and Nalls et al. (ref. 2 ) these allow a full reproduction of all MR (incl. SMR) and colocalization analyses presented in this study. Sharing of individual-level genome-wide SNP genotyping and DNAm profiling data is restricted by study-specific access policies. Interested researchers can contact the steering committees of the respective studies to inquire about access: 1. BASE-II: [ [email protected] ], 2. BBHI: [ [email protected] ], 3. LCBC [ [email protected] ]. Requested data will be made available pending appropriate institutional data protection security measures and ethical approval of the requestor’s institution.
Supplementary Material

Acknowledgements
The authors are grateful to all participants for their time, commitment, and willingness to participate in the BASE-II, BBHI, and LCBC studies. Part of this research was funded by the EU Horizon 2020 grant ‘Healthy minds 0–100 years: Optimising the use of European brain imaging cohorts’ (“Lifebrain”; grant #732592 to A.M.F., K.B.W., U.L., D.B.F., and L.B.), the Cure Alzheimer’s Fund (“CIRCUITS-AD” to L.B.), and the Deutsche Forschungsgemeinschaft (DFG; grant #DE842/7–1 to I.D.). Additional support was provided by the German Federal Ministry of Education and Research (for the BASE-II/GendAge studies) under grant numbers #01UW0808, #16SV5536K, #16SV5537, #16SV5538, #16SV5837, #01GL1716A and #01GL1716B. Further support came from the Norwegian Research Council (to K.B.W., A.M.F.), the National Association for Public Health’s dementia research program, Norway (to A.M.F.), the European Research Council’s Starting Grant (agreements 283634 to A.M.F. and 313440 to K.B.W.) and Consolidator Grant (grants #771355 to K.B.W. and #725025 to A.M.F.) schemes, the “EU Joint Programme – Neurodegenerative Disease Research 2021” (JPND2021, “EPIC4ND project” to C.M.L.) and the DFG (LI 2654/2–1 to C.M.L.). C.M.L. was supported by a Heisenberg grant of the DFG (LI 2654/4–1). Dr. D. Bartrés-Faz was partly supported by the Barcelona Brain Health Initiative and Institute Guttmann and an ICREA Academia 2019 Research Award by the Catalan Government. Dr. A. Pascual-Leone was partly supported by the Barcelona Brain Health Initiative and Institute Guttmann, the National Institutes of Health (R01AG076708, R01AG059089, R03AG072233, and P01 AG031720), and the Bright Focus Foundation. We would like to thank Dr. Johann S. Hawe at the Institute of Computational Biology, Deutsches Forschungszentrum für Gesundheit und Umwelt, Helmholtz Zentrum München, Neuherberg, Germany, for his kind assistance with generating the chessboard plots.
Lastly, we acknowledge the support of the OMICS high-performance compute cluster at University of Lübeck ( https://www.itsc.uni-luebeck.de/dienstleistungen/omics/omics-english.html ) where essentially all computational analyses of this study were performed.

License: CC BY-ND. Citation: medRxiv. 2023 Dec 24;:2023.12.22.23300365.
PMC10775420 (PMID: 38196643)

Background
In regulated clinical trials, investigators must rely on research data acquired to (1) ensure the safety and efficacy of medical treatments (to protect research participants and the general population at large), and (2) ensure the reliability and reproducibility of study results. High quality data provide the foundation from which study conclusions may be drawn, 1 and, in contrast, poor data quality threatens the validity and generalizability of study findings. 1 , 2 In general, quality refers to “a product or service free of deficiencies” 1 , 3 – some experts also using terms like “fitness for use” 4 and “conformance to requirements.” 5 Within the context of clinical research and the practice of clinical data management, the Institute of Medicine defines data quality as data that “support the same conclusions as error free data.” 6 There are several attributes tied to quality, but, for this project, we focused primarily on data accuracy – data that accurately represent data points collected directly from study participants. 1
Authors in the clinical research arena lament the scarcity of published information regarding data quality. 6 – 18 While many authors point out that conclusions drawn from studies depend on data quality (and the underlying data collection and management methods), others consider the associated tasks clerical or even unnecessary. 19 – 22 This perception has resulted in minimal investigation and a small number of publications on the topic of data collection and management compared with other areas of clinical research and informatics methodology. With the current rapid influx of new technology into clinical research – starting with electronic data capture (EDC) and clinical trial management systems (CTMSs) shortly after the turn of the century, and followed by electronic patient reported outcomes (ePRO) systems, mobile health (mHealth), a myriad of digital health technologies (DHTs), and direct electronic health record-to-electronic case report form (EHR-to-eCRF) tools – understanding the quality of data from different available capture and processing methods has become even more important. 23 , 24 Many unresolved issues exist with respect to data quality in clinical research, including a thorough understanding of the accuracy and variability of current data processing methods 24 – 28 – a primary objective of this manuscript. A thorough review and synthesis of the relevant published literature is an initial step in providing guidance to investigators and clinical research teams. Accordingly, we aimed to address this gap through the systematic review and meta-analysis described in this manuscript.
Common options in data processing methods identified in the literature include: (1) chart review and abstraction versus direct electronic acquisition from electronic medical records (i.e., both types of medical record abstraction, or MRA); (2) use of vended or commercial data collection systems by local healthcare facilities (e.g., data entry and cleaning in local systems versus web-based data entry and cleaning in a centrally hosted system); (3) use of paper data collection forms with central processing versus local processing with data transfer to a central coordinating center; and (4) single- versus double-data entry (with or without programmed edit checks). Data cleaning methods also vary greatly, from use of reports to identify irregularities in the data, to on-screen checks (OSCs) during data entry (e.g., programmed edit checks), to post-entry batch data processing. We define the 4 major processing methods considered in this review (MRA, optical scanning, single-data entry, and double-data entry with or without programmed edit checks) in Table 1 .
Complicating comparisons of different data processing methods is the significant variability in quantitative methods for assessing data accuracy across clinical research and other secondary data uses. 1 , 29 , 30 Data accuracy has often been measured in terms of database error rates, although registries commonly assess percent completeness as well. To standardize, the Society for Clinical Data Management’s (SCDM) Good Clinical Data Management Practices (GCDMP) document has defined the error rate as the “number of errors divided by the number of data values inspected.” 1 , 31
As described in the GCDMP, 1 there are significant differences in the way errors and values are inspected and counted across different clinical research studies, even across those conducted by the same institution. Based on these counting differences, the error rates obtained can differ by a factor of 2 or more. 1 , 30 In addition, differences in how error rates are reported (e.g., as raw counts, errors per record, errors per fields inspected, or errors per 10,000 fields) necessitate scaling and normalization of the values reported in the literature before comparisons can be made. Due to variability in counting, such comparisons may still not be meaningful. Here, we undertook a systematic review of the relevant literature identified through PubMed to characterize data collection and processing methods utilized in clinical studies and registries. Additionally, we conducted a meta-analysis to calculate and compare error rates across the various data processing methods described.

Methods
Literature Review
A PubMed search on the Medical Subject Heading (MeSH) terms “data quality” AND (registry OR “clinical research” OR “clinical trial”) through 2008 was conducted to identify relevant citations (see Additional File 1, Appendix A, Item A1 for the full PubMed Search Strategy and Table A2 for the PRISMA Checklist). Once an initial list of manuscripts was generated via PubMed, duplicates were excluded. The abstracts of the de-duplicated set of citations were screened for relevance against the eligibility criteria and those not meeting the criteria were also excluded. A search using PubMed related links and secondary and tertiary references was then conducted to identify additional manuscripts. The full-text of included manuscripts was reviewed against the eligibility criteria to generate the final set of manuscripts for inclusion in analysis (see Additional File 1, Appendix A, Reference List A3 and Table A4).
Criteria for Manuscript Inclusion
The goal of this search was to identify quantitative reports of data quality in clinical studies, and the search terms and logic were selected to optimize retrieval of such reports. If we consider this review in terms of the commonly used Patient/Population, Intervention, Comparison, Outcomes (PICO) framework 32 for clinical searches, we can break down our search as follows. The population of interest was “clinical studies” – more specifically, “registries” or “clinical research” or “clinical trials” that relied on secondary use of healthcare data. The intervention of interest was “data processing methods” – in other words, activities that were carried out during the study to acquire, process, and/or manage the data of interest. As our research question was one of characterization, we did not look for papers reporting methodological comparisons. With respect to outcome, we required quantitative reports of data quality such that we could calculate an error rate as the number of data values in error divided by the number of data values inspected.
Manuscripts were included in the analysis if: (1) they were published in peer reviewed journals indexed for retrieval or referenced by such and were obtainable; (2) they had a focus on secondary data use of healthcare data (e.g., clinical research, quality improvement, surveillance, research registries); (3) the database error rate was presented or resolvable (e.g., via number of errors identified and number of fields inspected, or contained sufficient information to calculate); (4) they described how the data were processed (e.g., MRA, optical scanning, single- or double-data entry); (5) they were written in the English language; and (6) the manuscript was the primary source for the error rate. Manuscripts not meeting 1 or more of these inclusion criteria were excluded.
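The six inclusion criteria amount to a conjunctive filter over each candidate manuscript; a minimal sketch of that screening logic (the record field names are invented for illustration):

```python
def eligible(m):
    """Apply the six inclusion criteria to a manuscript record (all must hold)."""
    return all([
        m["peer_reviewed_and_obtainable"],      # (1) indexed, peer reviewed, obtainable
        m["secondary_use_of_healthcare_data"],  # (2) clinical research, QI, surveillance, registry
        m["error_rate_reported_or_resolvable"], # (3) error rate presented or calculable
        m["data_processing_method_described"],  # (4) e.g., MRA, scanning, single/double entry
        m["language"] == "English",             # (5) written in English
        m["primary_source_for_error_rate"],     # (6) primary source for the error rate
    ])

candidate = {
    "peer_reviewed_and_obtainable": True,
    "secondary_use_of_healthcare_data": True,
    "error_rate_reported_or_resolvable": True,
    "data_processing_method_described": True,
    "language": "English",
    "primary_source_for_error_rate": True,
}
```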
Information Gathered from Manuscripts
Three types of data were collected from each manuscript: (1) information about how data were processed; (2) information about how data quality was measured; and (3) the number of errors and number of fields inspected. Concepts of interest concerning the reported data processing and quality measurement methods were noted as each manuscript was read. Prior to quantitative data analysis, factors identified from items (1) and (2) were developed in a qualitative, iterative manner during the review of the manuscripts. As such, concepts of interest, such as OSCs versus batch data discrepancy identification, were added to the data collection form as they were identified, and previously reviewed manuscripts were re-reviewed for presence of the newly identified concepts of interest. Natural groupings were organized into categories. These categories were later explored in the analysis to ascertain which (if any) of the factors might affect data quality.
The following parameters were also collected, but were considered supplemental: data cleaning method (i.e., batch data cleaning), location of data processing (central data center vs. local healthcare facility), gold standard used, and scope of method of comparison.
Quantitative data accuracy information, including the number of errors identified and the number of fields inspected, was abstracted from the manuscripts. Manuscripts were categorized by type of secondary data use, data processing method, and data accuracy assessment. The number of errors and number of fields inspected were used to calculate normalized error rates (number of errors per 10,000 fields) based on the recommendations in the GCDMP. 1 In cases where the authors presented only normalized error rates, such as errors per 10,000 fields, the normalized denominator was assumed for the total number of fields inspected. For example, if the normalized error rate presented was 100 per 10,000 fields, we took 100 to be the total number of errors (numerator) and 10,000 to be the total number of fields (denominator). Where error rates for more than 1 database were provided in a manuscript, each individual assessment was included in this analysis. Where error rates for multiple data processing steps were provided, we included each.
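The normalization just described is a simple rescaling; a minimal sketch of the GCDMP-style rate (errors divided by fields inspected, expressed per 10,000 fields), including the assumed-denominator convention for papers that report only normalized rates (the example counts are hypothetical):

```python
def errors_per_10k(n_errors, n_fields):
    """Normalized error rate: errors per 10,000 data values inspected."""
    if n_fields <= 0:
        raise ValueError("number of fields inspected must be positive")
    return n_errors / n_fields * 10_000

# Raw counts abstracted from a (hypothetical) manuscript:
rate = errors_per_10k(n_errors=37, n_fields=25_000)        # 14.8 per 10,000 fields

# A paper reporting only "100 errors per 10,000 fields" is treated as
# 100 errors over an assumed denominator of 10,000 inspected fields:
assumed = errors_per_10k(n_errors=100, n_fields=10_000)
```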
For consistency, 1 rater was used to abstract the error rate information from the manuscripts. A sample of the manuscripts included in the analysis, comprising 10% of the total (standard for the domain), was re-evaluated by the primary rater following the initial abstraction to assess reliability. For the sample, the time between the initial and intra-rater reliability review was at least 1 year. Intra-rater reliability, calculated as percent difference, was used to gauge reliability of the data. In addition, a second rater reviewed the same intra-rater reliability sample.
Statistical Analysis
Meta-analyses of single proportions 33 , 34 from studies in the literature, based on the Freeman-Tukey transformation method 35 and the generalized linear mixed model approach, 36 were used to derive an overall estimate of error rates across data processing methods for comparison. We also performed subgroup analyses where the data allowed. All statistical tests were performed at a two-sided significance level of 0.05, and all analyses were carried out using the R packages ‘metafor’ and ‘meta’. 37 , 38 For each of the data processing methods, we used an inverse-variance weighted meta-analytical method with Freeman-Tukey transformation 35 to calculate the pooled effect size and corresponding 95% confidence interval (CI). In the analysis, records with studentized residuals greater than an absolute value of 3 were considered outliers and subsequently removed. The degree of heterogeneity between studies was examined based on the Q-statistic and Higgins and Thompson’s I² statistic. The I² statistic can be interpreted approximately as ≤ 25%, indicating low heterogeneity; 25% to 75%, indicating moderate heterogeneity; and > 75%, indicating considerable heterogeneity. 39 The Q-statistic is typically underpowered for detecting true heterogeneity when the number of studies is small; therefore, we pooled data using a random effects model. The inter-study variance was evaluated by computing tau-squared (τ²), which estimates the variance of the underlying true effects across studies. Finally, to evaluate the consistency of our results, a sensitivity analysis was conducted using a leave-one-out model. 40 Also, a meta-regression with a mixed-effects model and Freeman-Tukey transformation was implemented to compare the pooled effects among data processing methods.

Results
Manuscripts Included for Analysis
An initial search of the literature identified 350 citations. After excluding duplicates and performing the initial screen of abstracts, 54 manuscripts remained. A search using PubMed related links and secondary and tertiary references identified an additional 70 manuscripts, yielding 124 manuscripts for full-text review. Through the full-text review, we identified the final set of 93 manuscripts (see Additional File 1, Appendix A, Reference List A3 and Table A4), which were included in the pooled literature analysis ( Figure 1 ).
Four manuscripts 41 – 44 presented only normalized error rates as errors per 10,000 fields; for these, the denominator (10,000) was assumed for the total number of fields inspected. Each manuscript described a data quality assessment of 1 or more databases. Likewise, in some manuscripts, error rates were reported for more than 1 process step; for example, medical record-to-CRF or source-to-CRF, CRF-to-first entry, first entry-to-second entry, or CRF-to-clean file. A total of 22 manuscripts reported results for more than 1 processing step or database, 14 , 29 , 41 , 43 , 45 – 62 providing a total of 124 data points normalized as the number of errors per 10,000 fields and demonstrating that reports of data accuracy have become increasingly dispersed across the health-related research literature over time. The data processing methods, as reported in the literature, were not mutually exclusive; thus, some articles appear in more than 1 category (see Additional File 1, Appendix A, Table A5).
Meta-Analysis
During the meta-analysis, 9 records with absolute studentized residual values greater than 3 were identified as outliers and, consequently, excluded from the analysis. Thus, 84 manuscripts remained, which were categorized by data processing method and were included in the final analysis. Database error rates ranged from 2 – 2,784 errors per 10,000 fields (having excluded outliers) across 4 data processing methods: MRA, optical scanning, single-data entry, and double-data entry. This 3 orders-of-magnitude range necessitated a logarithmic display. There appeared to be no pattern in the year-to-year reporting. The data processing method with the highest error rates was MRA, having a pooled error rate of 6.57% (95% CI: 5.51, 7.72) ( Table 2 ). The 3 other processing methods (optical scanning, single-data entry, and double-data entry) had much lower pooled error rates at 0.74% (0.21, 1.60), 0.29% (0.24, 0.35) and 0.14% (0.08, 0.20), respectively ( Table 2 ). Heterogeneity was observed in all 4 data processing methods (see Additional File 2, Appendix B, Figures B1–B4). The sensitivity analysis did not indicate the extreme influence of any particular study (see Additional File 3, Appendix C, Tables C1–C4).
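The outlier screening and pooling steps can be illustrated end-to-end. The sketch below re-implements, in simplified form, three ingredients named in the Methods: the Freeman-Tukey double-arcsine transform, DerSimonian-Laird random-effects pooling with I², and a crude leave-one-out standardized residual as a stand-in for metafor's externally studentized residuals. The study counts are hypothetical, and the back-transform p = sin(t/2)² is an approximation of the exact inversion; the original analyses were run in R with 'metafor' and 'meta', not this code:

```python
from math import asin, sin, sqrt

def ft(x, n):
    """Freeman-Tukey double-arcsine transform of x/n; sampling variance ~ 1/(n + 0.5)."""
    t = asin(sqrt(x / (n + 1))) + asin(sqrt((x + 1) / (n + 1)))
    return t, 1.0 / (n + 0.5)

def pool(counts):
    """DerSimonian-Laird random-effects pool of FT-transformed proportions.

    counts: list of (errors, fields_inspected). Returns (pooled_p, i2_percent).
    """
    ts, vs = zip(*(ft(x, n) for x, n in counts))
    w = [1.0 / v for v in vs]
    fixed = sum(wi * ti for wi, ti in zip(w, ts)) / sum(w)
    q = sum(wi * (ti - fixed) ** 2 for wi, ti in zip(w, ts))
    df = len(counts) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    wr = [1.0 / (v + tau2) for v in vs]
    pooled_t = sum(wi * ti for wi, ti in zip(wr, ts)) / sum(wr)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return sin(pooled_t / 2) ** 2, i2

def loo_residuals(counts):
    """Leave-one-out standardized residuals on the FT scale (|z| > 3 flags outliers)."""
    ts, vs = zip(*(ft(x, n) for x, n in counts))
    res = []
    for i in range(len(counts)):
        w = [1.0 / v for j, v in enumerate(vs) if j != i]
        rest = [t for j, t in enumerate(ts) if j != i]
        mean_rest = sum(wj * tj for wj, tj in zip(w, rest)) / sum(w)
        res.append((ts[i] - mean_rest) / sqrt(vs[i] + 1.0 / sum(w)))
    return res

# Ten hypothetical studies near a 1% error rate, plus one extreme study at 4%.
counts = [(100, 10_000)] * 9 + [(400, 10_000)]
outliers = [i for i, z in enumerate(loo_residuals(counts)) if abs(z) > 3]
pooled_p, i2 = pool([c for i, c in enumerate(counts) if i not in outliers])
```

In this toy example the screen flags only the extreme study, and the remaining nine pool to roughly their common 1% rate with no residual heterogeneity.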
Subgroup Analysis
In exploring subgroups of the 4 main data processing methods, there is insufficient information in the literature about the MRA methods employed to further investigate possible causes for the variability in a subgroup analysis. Similarly, there were too few optical methods data points to support a subgroup analysis. For single- and double-data entry, a review of the literature surfaced different variations on key entry, including single-data entry (1 person enters the data), single-data entry with on-screen data checks (1 person enters the data within a system employing programmatic OSCs), and double-data entry (2 people independently enter data with a third, independent adjudicator to review and resolve discrepancies). Further variations on single-data entry found in the literature included use of batch data cleaning and the location of data processing. These results are provided in Additional File 4, Appendix D, Table D1. Due to the importance of this particular configuration, manuscripts reporting data accuracy from similar data processing configurations (e.g., central versus distributed data processing in the presence of OSCs) were examined (see Additional File 4, Appendix D, Table D2). Sixty-eight studies (across 49 manuscripts) reported central processing versus 49 studies (across 39 manuscripts) reporting distributed processing, while 7 studies (across 5 manuscripts) did not report the location of data processing (noted in Table A4, see Additional File 1, Appendix A).
The intra-rater reliability for number of errors, number of fields, and error rate were 85%, 97%, and 86%, respectively. In addition, a second rater reviewed the same intra-rater reliability sample, with comparable results. In light of the underlying variability in the data, the variability in error rate calculation methods currently in use, and the aims of this study, these were considered reasonable. In addition, they were comparable to those in a similar review paper of errors in EHRs. 63

Discussion
This study calculated and compared error rates across the various data processing methods described in the literature. The results indicated that the accuracy associated with data processing methods varied widely. Error rates ranged from 2 to 2,784 errors per 10,000 fields within the 4 most common data processing methods, strengthening our understanding of the influence of data processing and cleaning methods on data accuracy.
Medical Record Abstraction
When ordered by mean error rate, MRA was the data processing method associated with the highest error rate. Importantly, abstraction was also associated with significant variability. Notably, the error rates reported for MRA methods span 3 orders of magnitude, with error rates ranging from 70 to 2,784 errors per 10,000 fields. These results support claims that MRA, which remains the dominant method of data collection in retrospective and prospective research, is the most significant source of error across data processing methods. 13 , 64
Optical Scanning
Although optical scanning methods such as OCR and OMR have been touted as a faster, higher-quality, or less resource-intensive substitute for manual data entry, 19 , 54 , 65 – 71 others have reported error rates with optical methods that were 3 times higher than manual keyboard data entry. 72 Based on the pooled literature, we found optical scanning error rates ranged from 2 to 358 errors per 10,000 fields. Optical methods were associated with a variability of 2 orders of magnitude in accuracy. Such variability may be influenced by: (1) the presence and type of data cleaning employed in processing the optical scans; (2) use of post-entry visual verification or pre-entry manual review; (3) training of form completers on handwriting; (4) differences in form compatibility with the software; (5) software configuration (e.g., recognition engine); and (6) variations in data quality assessment methods. In particular, given reported human inspection error rates in other disciplines ranging from 16.4% to 30.0%, 73 – 77 manual visual verification is likely less effective than OSCs.
Single- vs. Double-Data Entry
Overall, single-entry error rates ranged from 4 to 650 errors per 10,000 fields, and double-entry error rates ranged from 4 to 33 errors per 10,000 fields. Great variability was observed between different sub-types of single-data entry, which provides a plausible explanation for the high level of variability observed in single-data entry as a whole. This is an important finding because large amounts of data are collected through single-data entry from research sites via web-based systems, including entry of abstracted data into web-based systems, clinicians entering data in EHRs, and data collected directly from patients via hand-held devices. Due to the problem of “alert fatigue,” however, OSCs may not be feasible in EHRs, where clinical alerts will often be a higher priority. The question of alert fatigue in these systems is an important topic for further research.
Measuring Data Accuracy
Claiming to have measured data accuracy (or error) implies that the measurer has compared the data to something, identified differences, and, in the case of a difference, was able to discern whether the data value from the assessed dataset was in error or not. In other words, a gold standard exists. In addition to the aforementioned differences in counting errors and data values inspected, there was also variability in the literature with respect to the comparison made to measure data accuracy. In some cases, the comparator was the medical record; in other cases, it was an upstream recording of the data; in other cases, it was another dataset supposed to contain the same observations on the same individuals; and in still other cases, it was independent collection of the same information, such as a repeat interview or test. As evidenced by the literature and practice standard, 78 the error rate has historically been the accuracy metric used. However, use of sensitivity and specificity has been recommended in draft regulatory guidance as the preferred measures of accuracy in the case of EHR and claims real-world data (RWD). 79 Sensitivity and specificity are preferred over overall accuracy or error rates because they are not dependent on prevalence. 80 These measures were not often used in the included manuscripts, probably due to a long history of using accuracy (the sum of true positives and true negatives divided by the total number of data values inspected) or error rate (the sum of false positives and false negatives divided by the same denominator) metrics. Where a gold standard is not available, errors cannot be determined in the case of a difference, and the difference or discrepancy rate is tallied instead. In this case, only measures of agreement such as inter-rater reliability and chance-adjusted agreement are appropriate. There are many such measures. 
81 These measures, along with measures of agreement, were used far more commonly in the included manuscripts than sensitivity and specificity. It is important to note that, while agreement may correlate with accuracy, agreement measures are not measures of data accuracy and, in many cases, may differ substantially from measures of accuracy.
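Assuming a gold standard is available, the accuracy and error-rate metrics described above, and the sensitivity and specificity preferred by the draft guidance, reduce to simple counts over the comparison (the confusion-matrix example values below are hypothetical):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy metrics from comparing a dataset to a gold standard."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,    # historically used
        "error_rate": (fp + fn) / total,  # historically used
        "sensitivity": tp / (tp + fn),    # preferred: not prevalence-dependent
        "specificity": tn / (tn + fp),    # preferred: not prevalence-dependent
    }

m = metrics(tp=90, tn=880, fp=20, fn=10)
# accuracy 0.97, error_rate 0.03, sensitivity 0.90, specificity about 0.978
```

Because sensitivity and specificity condition on the true value rather than the overall mix of cases, they do not shift when prevalence changes, which is the stated reason they are preferred for RWD.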
As web-based EDC becomes the predominant method of clinical research data collection, we anticipate heavier reliance on programmed edit checks to reduce error rates. Additionally, the role and process of programmed edit checks could serve as a model for data quality checking within the more automated, standards-based processes of future data exchange, such as direct EHR-to-eCRF methods using the Health Level Seven (HL7 ® ) Fast Healthcare Interoperability Resources (FHIR ® ) standard. 82 – 87
Limitations
This study was a secondary, pooled analysis of database error rates in the published literature. Although it constitutes an important contribution in synthesizing the very fragmented historical literature, there are significant and inherent limitations. Very few of the included papers were controlled studies. Most of the included manuscripts merely stated the observed error rate and described data handling methods as part of reporting research results from a clinical study. With the exception of 8 included manuscripts (manuscripts 15, 36, 39, 42, 45, 71, 88, 92 from Appendix A, Table A4), the included studies were observational in nature (a “one shot” design) and lacked a comparator, i.e., “low quality evidence”. It is ironic that the level of rigor expected of evidence is not expected of the methods used to generate it. The risk of bias in included studies is significant. However, we do not claim causation; we report only associations and provide multiple possible explanations for them, which may encompass domains of bias. The ROBINS-I (Risk Of Bias In Non-randomized Studies - of Interventions) tool enumerates 7 domains of bias: confounding, selection of participants into the study, classification of interventions, deviations from intended interventions, missing data, measurement of outcomes, and selection of the reported result. 88 For this research, we acknowledge that confounding could be present in any of the non-randomized included studies; for example, those reporting use of programmatic data quality checks may be more quality-conscious, or generally more careful. In general, reports tended to use random sampling or a census, obviating the second domain of bias.
A lack of standard terminology for data processing methods potentially affected this analysis through the high likelihood that relevant manuscripts were not identified or that descriptions in existing manuscripts were misinterpreted, i.e., bias from misclassification of the intervention. Though classification of the intervention (data processing method) was done systematically by the research team, the descriptions in the included studies themselves may be a source of bias.
As a secondary analysis, this work relies on data that were collected for other purposes. Although we used error and field counts reported in the literature, prior work has shown that even these have significant variability. 1 , 30 For example, some may count dates as discrepant if there is not an exact match, while others may allow a window of several days; field counts may exclude null fields, or include fields entered once and propagated to multiple places. 89 There likely is a bias toward counting rules that yield a larger denominator and smaller numerator. These represent a potential bias in measurement of outcome, and in handling missing data. Selection of the reported result or reporting bias is likely to be significant with reports tending toward those with lower error rates. Though the latter would have impacted use of our results as an acceptance level, it would not have impacted the comparisons between error rates and data processing methods because all included studies were equally subject to reporting bias. Taken together, the risks of bias in included studies would tend toward lower reported error rates and less difference between data processing methods, since the ideal in all cases is low error rates.
As with any literature review, there is the possibility that we may have missed relevant manuscripts in our search. Further, while the search, screening, and abstraction of information from the manuscripts was systematic, the search was only executed in PubMed. Other databases, such as EMBASE, were not searched; thus, manuscripts indexed in other databases were not included. Therefore, our results should only be considered representative of the biomedical literature searchable through PubMed.
Most of the manuscripts in our review were from academic organizations and government- or foundation-funded endeavors that employ different data collection and management methodologies. Although those methods have tended to converge over the time span of the literature we reviewed, our results may be less applicable to industry-funded studies. Though our results are relevant to EDC data collection and cleaning processes, having to exclude the EDC literature (no manuscripts past the year 2008) from this review is a limitation. Authors did not consistently report the processes undertaken for collection and processing, nor did they include the error rate. For example, as reported in Nahm and colleagues in 2008, 23 some sites used paper worksheets to record data abstracted from medical records, while others charted source data directly in such worksheets, versus others that abstracted directly from the medical record into the EDC system without a paper intermediary. Because these aspects often could not be resolved in published manuscripts, the review was truncated to account for the onset of EDC adoption, with the latest included manuscript published in 2008.
Exclusion of the EDC literature would have most significantly impacted the applicability of the MRA error rate results today. For abstracted data recorded directly into EDC, use of on-screen checks would likely reduce the error rate. The lack of data accuracy quantification with EDC processes reported by Zozus and colleagues (2020) was evident here: only 2 of the 12 reports 23 , 90 of data quality measured for EDC processes provided an error rate that would have met inclusion criteria for this study. A recent review summarizing the EDC data quality literature similarly found descriptions of data collection and processing methods to be absent or altogether lacking from reports of research results, which remains a serious omission. 24 , 91
Future Direction
As data (increasingly captured electronically) are used to support clinical research, the effects of data quality on decision-making need thorough exploration. Potential effects of system usability and data processing methods on data quality should also be characterized to guide data management and planning choices. In particular, the 2018 revision of Good Clinical Practices (GCP) calls for risk-based prioritization of study activities that focus resources on activities that impact human safety and research results. 92 Use of the word ensure rather than assure in the guidance strongly suggests that quality management systems be in place to prospectively design capable processes and to control error rates within acceptable limits. We found very few reports of prospective prediction of process capability or of implementation of process control for the data error rate. Quality management system (QMS) design and implementation with respect to data accuracy remains an area for further exploration. The variability and the magnitude of error rates reported in the literature should encourage quantitative evaluation of the impact of new technology and processes on data accuracy and subsequent decisions regarding whether the accuracy of the data is acceptable for the intended use.

Conclusion
Based on the pooled analysis of error rates from the published literature, we conclude that data processing and cleaning methods used in clinical trials research may explain a significant amount of the variability in data accuracy. For example, MRA error rates were the highest and most variable compared to other data collection and processing methods, and the observed error rates in the top quartile (904 to 2,784 errors per 10,000 fields) were high enough to potentially impact the results and interpretation of many clinical studies. In general, error rates reported in the literature were well within ranges that could necessitate increases in sample sizes of 20% or more in order to preserve statistical power for a given study design. 93 , 94 Data errors have also been shown to change p values 95 and attenuate correlation coefficients toward the null; 96 – 98 in other words, a given clinical trial may fail to reject the null hypothesis because of data errors rather than because of a genuine lack of effect for the experimental therapy. 99 In the presence of large data error rates, a researcher must then choose to either (1) accept unquantifiable loss of statistical power and risk failure to reject the null hypothesis due to data error; or (2) measure the error rate and increase the sample size to maintain the original desired power. 89 , 94 , 98 The adverse impact of data errors has also been demonstrated in registries and performance measurements, 55 , 99 – 103 as has failure to report data. 104 Thus, the choice of data processing methods can likely impact process capability and, ultimately, the validity of trial results. Our findings suggest that reporting the results of a clinical study without specifying (1) the error rate, (2) the uncertainty in the error rate, and (3) the method used to measure the error rate limits the ability to interpret study findings.
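The sample-size consequence can be illustrated with a standard attenuation argument (a textbook sketch under simplifying assumptions, not the cited authors' calculation): if measurement error attenuates the true effect size by a factor λ, the sample size required to preserve power for a two-group comparison scales as 1/λ².

```python
import math

def inflated_n(n_planned, attenuation):
    """Sample size needed to preserve power when the observed effect is
    attenuated to (attenuation * true effect); required n scales as
    1 / attenuation**2 for a two-group mean comparison."""
    return math.ceil(n_planned / attenuation ** 2)

# An effect attenuated by roughly 9% already requires about 20% more subjects:
bigger_n = inflated_n(400, 0.913)  # 480
```

Under this sketch, even modest attenuation of the observed effect is enough to produce the 20%-or-more sample size increases cited above.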
While such results in aggregate are shocking, we do not present them to incite panic or cast doubt upon clinical research results. Other factors that are not assessable here, such as variables in which the errors occurred, and statistical methods used to take the measurement error into account, are necessary for such assessments. We applaud the authors of the reviewed papers for their rigor and forthrightness in assessing error; measurement is the first step in management. We hope that our analysis makes a strong and convincing argument for the measurement and publication of data accuracy in clinical research.

This work was conducted while Dr. Simon was at the ECHO Program at the NIH. He is now at the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention (CDC).
Authors’ Contributions
MYG and MNZ conceived and designed the study. ACW, AES, LAD, LWY, JL, JS, and SO contributed significantly to the conception and design of the project. MYG and MNZ contributed significantly to the acquisition of the data. MYG led and was responsible for the data management, review, and analysis. AES, JL, MNZ, SO, TW, and ZH contributed significantly to the analysis and interpretation of the data. MYG led the development and writing of the manuscript. All authors reviewed the manuscript and contributed to revisions. All authors reviewed and interpreted the results and read and approved the final version.
Background:
In clinical research, prevention of systematic and random errors of data collected is paramount to ensuring reproducibility of trial results and the safety and efficacy of the resulting interventions. Over the last 40 years, empirical assessments of data accuracy in clinical research have been reported in the literature. Although there have been reports of data error and discrepancy rates in clinical studies, there has been little systematic synthesis of these results. Further, although notable exceptions exist, little evidence exists regarding the relative accuracy of different data processing methods. We aim to address this gap by evaluating error rates for 4 data processing methods.
Methods:
A systematic review of the literature identified through PubMed was performed to identify studies that evaluated the quality of data obtained through data processing methods typically used in clinical trials: medical record abstraction (MRA), optical scanning, single-data entry, and double-data entry. Quantitative information on data accuracy was abstracted from the manuscripts and pooled. Meta-analysis of single proportions based on the Freeman-Tukey transformation method and the generalized linear mixed model approach was used to derive an overall estimate of error rates across the data processing methods used in each study for comparison.
Results:
A total of 93 papers (published from 1978 to 2008) meeting our inclusion criteria were categorized according to their data processing methods. The accuracy associated with data processing methods varied widely, with error rates ranging from 2 errors per 10,000 fields to 2,784 errors per 10,000 fields. MRA was associated with both high and highly variable error rates, having a pooled error rate of 6.57% (95% CI: 5.51, 7.72). In comparison, the pooled error rates for optical scanning, single-data entry, and double-data entry methods were 0.74% (0.21, 1.60), 0.29% (0.24, 0.35) and 0.14% (0.08, 0.20), respectively.
Conclusions:
Data processing and cleaning methods may explain a significant amount of the variability in data accuracy. MRA error rates, for example, were high enough to impact decisions made using the data and could necessitate increases in sample sizes to preserve statistical power. Thus, the choice of data processing methods can likely impact process capability and, ultimately, the validity of trial results.

Acknowledgments
We thank Phyllis Nader, BSE for her assistance with this project.
Funding
Research reported in this publication was supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under award numbers UL1TR003107 and KL2TR003108, and by the IDeA States Pediatric Clinical Trials Network of the National Institutes of Health under award numbers U24OD024957, UG1OD024954, and UG1OD024955. The content is solely the responsibility of the authors and does not represent the official views of the NIH.
Availability of Data and Materials
The dataset(s) supporting the conclusions of this manuscript is(are) included within the manuscript (and its additional file(s)).
List of Terms & Abbreviations
EDC: Electronic Data Capture
CTMS: Clinical Trial Management System
ePRO: Electronic Patient Reported Outcomes
mHealth: Mobile Health
DHT: Digital Health Technology
EHR: Electronic Health Record
CRF: Case Report Form
eCRF: Electronic Case Report Form
MRA: Medical Record Abstraction — a data processing method that involves the review and abstraction of data from patient records, often referred to as chart review or chart abstraction. Traditional MRA is a manual process, which may or may not involve paper forms.
OSC: On-screen checks (i.e., programmed edit checks)
Optical Scanning: a data processing method used in clinical research that relies on software packages to “recognize characters from paper forms or faxed images, and these data are placed directly into the database.” 1
OCR: Optical Character Recognition; an example of optical scanning
OMR: Optical Mark Recognition; an example of optical scanning
SDE: Single-Data Entry — with respect to classification of data processing in the included manuscripts, single-data entry involves 1 person who enters data from a structured form into the study data capture system. SDE can be implemented with and without programmed edit checks (or OSCs). When on-screen checks are employed, a series of programmatic edit checks are actively running during data entry and will “fire” when a discrepancy is identified during data entry. The data entry person is then able to review and address discrepancies during data entry.
Programmed Edit Checks: a data processing method during which electronic data quality checks are programmed into the study data collection system and are triggered by data entry, either in real-time as data is entered field-by-field or upon the form being saved, or in batch based on some pre-determined criteria. Programmed Edit Checks are also referred to as Discrepancy Checks, Edit Checks, On-Screen Checks, or Query Rules.
DDE: Double-Data Entry — double-data entry involves 2 people (e.g., clinical research coordinator, data entry personnel) who independently enter data from a structured form into the study data capture system, with a third, independent adjudicator to review and resolve any discrepancies. DDE can be implemented with and without programmed edit checks (or OSCs)*.
SCDM: Society for Clinical Data Management
GCDMP: Good Clinical Data Management Practices
MeSH: Medical Subject Heading
PICO: Patient/Population, Intervention, Comparison, Outcomes
CI: Confidence Interval
RWD: Real-World Data
HL7: Health Level Seven
FHIR: Fast Healthcare Interoperability Resources 87
ROBINS-I: Risk Of Bias In Non-randomized Studies - of Interventions
GCP: Good Clinical Practice
QMS: Quality Management System
Citation: Res Sq. 2023 Dec 21;:rs.3.rs-2386986
PMC10775493 (PMID: 38196590)
For patients with metastatic colorectal cancer (mCRC), surgical resection, when possible, has been associated with long-term progression-free survival (PFS) and overall survival (OS). Several large retrospective series have demonstrated 5-year OS rates of 40–70% in patients with isolated liver metastasis following liver metastasectomy [ 1 – 5 ]. Indeed, improvements in OS in patients with newly diagnosed mCRC over the last several decades have been attributed, in part, to an increase in hepatic resection [ 6 ]. For patients with limited extrahepatic disease, complete surgical resection has also been associated with prolonged PFS and OS, although the data are more limited [ 7 – 9 ]. While prospective evidence is lacking, these retrospective studies have demonstrated excellent long-term survival for patients with resectable mCRC and have defined the current standard of care (SOC).
In patients with multiorgan oligo-mCRC (e.g., low burden but liver inoperable disease or minimal extrahepatic and/ or extra-thoracic disease), it is less clear whether local ablative therapies, including thermal ablation and stereotactic ablative radiation therapy (SABR), can provide clinical benefit such as durable control of disease or improve survival. Limited prospective data exist on the benefit of thermal ablation to all areas of inoperable hepatic disease. The CLOCC phase II randomized trial (EORTC-40004) demonstrated that the addition of radiofrequency ablation to systemic therapy improved OS in patients with mCRC with inoperable liver disease (hazard ratio [HR] 0.58, 95% confidence interval [CI] 0.38–0.88, p = 0.01) [ 10 , 11 ]. Multiple mature retrospective series have also reported high rates of local control and favorable long-term survival following the use of thermal ablation for CRC liver metastases [ 12 – 15 ].
SABR appears to be a safe and effective way to treat multiple metastatic sites in the lung, abdomen/pelvis, bone, and spine [ 16 ]. The use of SABR for the treatment of CRC lung and liver metastases has demonstrated local control rates of 80–90% with minimal toxicity [ 17 , 18 ]. Additionally, there is emerging evidence that SABR to all sites of radiographic disease may improve PFS and OS [ 17 , 18 ]. SABR-COMET was a randomized, phase II trial that demonstrated improved OS with SABR compared to the standard of care arm (median OS 41 vs. 28 months, HR 0.57, 95% CI 0.30–1.10, p = 0.090), although 4.5% of patients in the SABR arm experienced grade 5 treatment-related adverse events. While SABR-COMET was designed as a tumor-agnostic trial in 99 patients with up to 5 metastatic lesions, 27% (n = 9) of the patients in the control group and 14% (n = 9) in the SABR group had a CRC primary [ 19 , 20 ]. In addition to SABR-COMET, multiple other trials have shown the benefit of SABR in the oligometastatic setting, including in non-small cell lung cancer, prostate cancer, and renal cell carcinoma [ 21 – 25 ]. These studies demonstrate that SABR is a safe and highly effective locoregional therapy that improves oncologic outcomes in a variety of disease settings, including some settings where an oligometastatic paradigm has not been as well established as it has in CRC. In fact, to date there are no completed randomized clinical trials investigating the benefit of SABR in patients with mCRC.
High quality data on the utilization of multimodality metastatic-directed therapy, including the combination of surgical resection, thermal ablation, and SABR, for patients with mCRC are limited despite increased use in clinical practice. A recent prospective Finnish interventional study (the RAXO study) highlights the potential for multimodality-directed therapy in patients with metastatic CRC [ 26 ]. The 5-year OS for patients treated with systemic therapy alone was 6% compared to 40% for patients treated with local ablative therapies and/or surgical debulking (i.e., R2 resection) [ 26 ]. A 5-year survival of 40% with multimodality metastatic-directed therapy is quite notable; for context, the 5-year survival for patients who underwent metastasectomy (R0/R1 resection) was 66%. These data suggest that local metastatic-directed therapy with SABR, thermal ablation, and surgery may significantly improve cancer control and overall survival in patients with mCRC compared with continuing systemic therapy alone.
ERASur was jointly developed by the National Cancer Institute (NCI)’s Alliance for Clinical Trials in Oncology and NRG Oncology to evaluate multimodality metastases-directed therapy in patients with mCRC. Despite the long history of treating oligometastatic CRC, questions remain regarding the benefit of extending local metastatic-directed therapies to patients with more extensive metastatic disease, including patients with extrahepatic disease. The ERASur trial seeks to fill this gap by testing whether total ablative therapy (TAT) to all sites of metastatic disease improves survival, using a pragmatic design that integrates the current spectrum of multimodality local therapies. If the addition of TAT to SOC systemic therapy improves OS, it will be established as a new standard of care for patients with limited mCRC. If TAT is associated with increased toxicity without improving OS, then future treatment paradigms can avoid unnecessary toxicity associated with TAT.
Study Objectives
The primary objective of ERASur is to compare the outcome of using TAT in addition to SOC systemic therapy versus SOC systemic therapy alone in terms of OS, measured from the time of randomization, in patients with newly diagnosed limited mCRC. The secondary objectives include evaluating event-free survival, adverse events, and time to local recurrence for patients treated with TAT, defined as the time from the end of TAT to the date of first documented recurrence at any disease site treated with TAT.
Study Setting
ERASur is co-led by the Alliance for Clinical Trials in Oncology and NRG Oncology through the NCI National Clinical Trials Network (NCTN) and supported by the Southwest Oncology Group (SWOG) and the Eastern Cooperative Oncology Group-American College of Radiology Imaging Network (ECOG-ACRIN) Cancer Research Group. Patients will be accrued from member institutions of these NCTN cooperative groups which includes community and academic sites. NCI Central Institutional Review Board (CIRB) approved the study, with participating institutions relying on the CIRB. All patients must provide written informed consent.
Study Design
ERASur is a two-arm, multi-institutional, randomized phase III study investigating the effect of the addition of TAT to SOC systemic therapy in patients with limited mCRC. The study schema is illustrated in Figure 1 .
Patient Selection and Eligibility Criteria
Patients 18 years of age or older with histologically confirmed mCRC and 4 or fewer sites of metastatic disease are eligible. Metastatic sites must be radiographically evident, but pathologic confirmation is not required. Single sites include: each hemi-liver (right and left), each lobe of the lungs, each adrenal gland, lymph nodes amenable to a single resection or treatment in a single SABR field, and bone metastases amenable to treatment in a single SABR field. Patients with liver-only metastatic disease are not eligible, nor are patients whose tumors are known to harbor BRAF V600E mutations or to be microsatellite unstable. Metastatic lesions must be amenable to any combination of surgical resection, microwave ablation (MWA), and/or SABR, and SABR is required to at least one site. Detailed eligibility criteria are shown in Table 1. Patients will have the option of pre-registering for the study within 16 weeks of starting first-line SOC systemic therapy with regimens including 5-fluorouracil, leucovorin, and oxaliplatin (mFOLFOX6); capecitabine and oxaliplatin (CAPOX); 5-fluorouracil, leucovorin, and irinotecan (FOLFIRI); and 5-fluorouracil, leucovorin, oxaliplatin, and irinotecan (mFOLFOXIRI), with or without anti-VEGF or anti-EGFR therapies. For registration, a minimum of 16 weeks and a maximum of 26 weeks of first-line systemic therapy is required. Patients with overt disease progression after 16–26 weeks of first-line systemic therapy are not eligible for the study and, if pre-registered, will be removed. The study calendar is shown in Table 2.
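The timing and disease-burden rules above can be expressed as a simple screening check. The following sketch is purely illustrative — the parameter names and the reduced rule set are assumptions, and the protocol's full criteria in Table 1 are more extensive:

```python
def screen_for_registration(weeks_first_line, n_metastatic_sites,
                            liver_only, braf_v600e, msi_high,
                            progression_on_therapy):
    """Illustrative subset of ERASur registration checks (not the full protocol)."""
    if not (16 <= weeks_first_line <= 26):
        return False  # registration window: 16-26 weeks of first-line therapy
    if n_metastatic_sites > 4 or liver_only:
        return False  # 4 or fewer sites required; liver-only disease excluded
    if braf_v600e or msi_high:
        return False  # BRAF V600E-mutant and microsatellite-unstable tumors excluded
    if progression_on_therapy:
        return False  # overt progression on first-line therapy excluded
    return True

screen_for_registration(20, 3, liver_only=False, braf_v600e=False,
                        msi_high=False, progression_on_therapy=False)  # True
```

A patient 20 weeks into first-line therapy with 3 treatable sites and none of the exclusions would pass this simplified screen; a liver-only patient or one outside the 16–26-week window would not.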
Treatment Plan
Upon registration, which occurs after completion of a minimum of 16 and a maximum of 26 weeks of first-line systemic therapy, patients will be randomized to one of two treatment arms. Patients in arm 1 will undergo TAT followed by SOC chemotherapy per institutional practice. For patients in arm 1, the overall treatment plan will be discussed in a multidisciplinary setting, and the patient will be evaluated by physicians from all planned treatment modalities as early as possible for treatment planning. TAT will consist of surgical resection, MWA, and/or SABR to all sites of disease and must be completed within 90 days from randomization. At least one measurable site of metastatic disease must be present after completion of induction systemic therapy, and patients with a complete response to systemic therapy at the time of randomization will be removed from the trial. At least one metastatic site must be treated with SABR. The remaining sites can be treated by SABR with or without surgery and/or MWA. For treatment with SABR, the goal is to deliver a radiation dose that maximizes local control at the treatment site within the confines of anatomic and normal tissue constraints. Sites must be credentialed for each treatment modality that they intend to use on patients. All radiation therapy plans will be reviewed in real-time for quality assurance.
Resection of each planned metastatic lesion will be approached with the intent of an R0 resection. If surgical resection of a given metastasis is incomplete with gross or microscopic residual margins, the treatment team should strongly consider using an alternative ablative treatment modality such as MWA or SABR for any residual gross or microscopic disease. When addressing liver metastases, non-anatomic resection will be considered when feasible, and MWA will be considered to allow for a parenchymal-sparing approach for deep lesions less than 3 cm in size. For all patients who undergo surgery during protocol treatment, the preoperative imaging, operative note, surgical pathology report, and adverse events within 30 days of surgery will be reviewed by the study team for quality assurance.
MWA can be delivered either intra-operatively or using a percutaneous approach. Multiple electrodes and overlapping ablations will be permitted to ensure adequate coverage of the target. A minimum margin of 5.0 mm will be required for lesions treated with MWA on this study. Initial assessment of the ablation zone will be verified immediately intra-procedurally using ultrasound, computed tomography (CT), or magnetic resonance imaging (MRI). If a margin of <5.0 mm is observed at initial assessment, additional ablation will be attempted to extend the ablation zone, expanding the area of insufficient coverage to provide at least a 5.0 mm margin around the target tumor. If at the first imaging timepoint the tumor is deemed to be incompletely covered, the tumor can undergo repeat treatment without penalization. As quality assurance, the study team will review pre-treatment imaging, procedure notes, and adverse events within 30 days of treatment associated with MWA. Imaging from the first assessment timepoint at 14–18 weeks post-randomization will also be reviewed to ensure completion of planned ablation.
Lesions that are too small to be treated with any of the modalities included in TAT will be monitored and treated if they progress to a size that is amenable to treatment, and they will not be considered as Response Evaluation Criteria in Solid Tumors (RECIST) progression. Following completion of TAT, the treating healthcare team will consider re-starting systemic therapy within 2 weeks if no surgery is performed or within 4 weeks if surgery is included as part of TAT. Use of maintenance systemic therapy or systemic therapy breaks is permitted at the discretion of the treatment team. Patients randomized to arm 1 with the primary tumor intact will have the primary tumor removed within 6 months of randomization. Resection of the primary tumor may be performed at the same time as metastasectomy or may be staged per discretion of the healthcare team. For patients with primary rectal cancers, the use of pre-operative radiation or chemoradiation will be left to the discretion of the healthcare team.
Patients randomized to arm 2 will continue with systemic therapy with use of maintenance chemotherapy per institutional practice. Local metastatic-directed therapy will not be permitted except for palliation as per institutional standard practices. Palliative radiation therapy will be permitted for lesions causing symptoms that are not controlled by medical therapy with acceptable regimens including 30 Gy in 10 fractions, 24 Gy in 6 fractions, 20 Gy in 5 fractions, 8 Gy in 1 fraction, or an equivalent regimen. Systemic therapy breaks are permitted at any time at the discretion of the treatment team.
Assessment and Follow-up
Radiologic response will be evaluated using the RECIST version 1.1 guidelines [ 27 ]. A local recurrence will be defined differently based on the modality of treatment. For patients treated with SABR, a recurrence will be deemed local if located in or directly adjacent to the planning target volume. For a site treated using MWA, a recurrence will be deemed local if it is within 1 cm of the treatment site. For patients who undergo surgery, a recurrence will be considered local if it is located at the margin of resection.
Adverse events (AEs) will be graded using the Common Terminology Criteria for Adverse Events (CTCAE) Version 5.0. Solicited AEs will be collected from baseline prior to treatment until off treatment. Routine AEs will be collected starting after registration until the end of survival follow-up. The first treatment response assessment will occur at 14–18 weeks post-randomization, with subsequent assessments every 3 months until disease progression or the start of off-protocol anticancer therapy. Off-protocol anticancer therapies consist of any investigational agent or systemic therapy regimen not included in the protocol; for patients randomized to arm 2, this includes any local metastatic-directed therapy other than therapy delivered with palliative intent. All visits will include a history and physical examination, laboratory studies, AE assessment, and imaging with CT of the chest along with CT or MRI of the abdomen and pelvis or, alternatively, positron emission tomography/computed tomography (PET/CT). All patients, whether continuing on study or receiving off-protocol therapy, will be followed for OS, except patients who withdraw consent.
Correlative Studies
Patients may elect to consent to collection of blood and archival formalin-fixed paraffin-embedded tissue for future genomic analyses. Three 10 mL blood samples will be collected at several time points including within 14 days of pre-registration for those who enroll prior to initiating systemic therapy, at randomization, at 4 months, 8 months, and 1 year after randomization, and at disease progression.
Statistics
Sample Size
Per study design, a total of 346 patients (173 per arm) are needed to evaluate the primary endpoint. An additional 18 patients (5% inflation) will be accrued to allow for withdrawal after randomization and major violations. Thus, the total planned target accrual will be 364 patients. Approximately 405 patients will be pre-registered to reach this target accrual, allowing for a 10% drop-out during the initial 16 to 26 weeks of SOC systemic therapy due to complete response, progressive disease, unacceptable toxicity, withdrawal of consent, treating physician decision, or other reasons. With an anticipated accrual of 6.5 patients per month, we estimate the accrual period to be 4.7 years.
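The accrual arithmetic in this section can be reproduced with a short calculation (a sketch only; all numbers are taken directly from the protocol text):

```python
import math

# Accrual arithmetic from the ERASur sample-size section.
evaluable = 346                          # patients needed for the primary endpoint (173/arm)
inflation = math.ceil(evaluable * 0.05)  # 5% inflation, rounded up -> 18
total_accrual = evaluable + inflation    # planned randomized accrual
pre_registered = total_accrual / 0.90    # ~10% expected drop-out before randomization
accrual_years = total_accrual / 6.5 / 12 # at 6.5 patients accrued per month

print(total_accrual)              # 364
print(math.ceil(pre_registered))  # 405
print(round(accrual_years, 1))    # 4.7
```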
Power Analysis
Eligible patients will be stratified by the number of metastatic organ sites (1–2 vs. 3–4), timing of metastatic disease diagnosis (synchronous metastatic disease vs. metachronous metastatic disease diagnosed ≥12 months following completion of definitive treatment for initial diagnosis), and presence of at least one metastatic site outside the liver and lungs (yes vs. no). Participants will be assigned to one of two treatment arms in a 1:1 ratio, using a dynamic allocation algorithm [ 28 ]. This study will utilize a group sequential design with two interim analyses for futility after observing 25% (52 events) and 50% (104 events) of events, adopting the Rho family (Rho=1.5) beta spending function for controlling the type II error rate. Based on historical data, the median OS is assumed to be 26 months (following 16–26 weeks of initial SOC systemic therapy) for newly diagnosed mCRC patients treated with SOC systemic therapy. We assume an accrual rate of 6.5 patients per month, minimum follow-up on all patients of 60 months, exponential survival, and a one-sided log-rank test for superiority conducted at a one-sided significance level of 0.05. Based on these assumptions, a total number of 208 events will provide 80% power to detect an HR of 0.7 at a one-sided significance level of 0.05 requiring randomization of at least 346 evaluable patients (173 per arm). | Discussion
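The 208-event target can be sanity-checked against the standard Schoenfeld approximation for a fixed-design log-rank test. The sketch below lands somewhat under 208 because the protocol's group sequential design with Rho-family futility spending inflates the fixed-design event count:

```python
import math

# Schoenfeld approximation for a 1:1 randomized log-rank test:
#   events = 4 * (z_alpha + z_beta)^2 / (ln HR)^2
alpha_z = 1.6449   # standard normal quantile for one-sided alpha = 0.05
power_z = 0.8416   # standard normal quantile for 80% power
hr = 0.7
events_fixed = 4 * (alpha_z + power_z) ** 2 / math.log(hr) ** 2
print(round(events_fixed))  # 194 for a fixed design; the sequential design requires 208

# Under exponential survival, HR = 0.7 maps a 26-month control median to:
median_experimental = 26 / hr
print(round(median_experimental))  # 37 months, as stated in the study design
```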
ERASur is a multicenter randomized phase III clinical trial currently accruing through the U.S. NCI NCTN, designed to evaluate the benefit of adding metastatic-directed therapy to SOC systemic therapy in patients with limited mCRC. As imaging and treatment technologies and techniques advance, so does the ability to detect and safely treat metastatic disease with local therapy. However, carefully designed prospective randomized trials are needed to fully establish the value of this strategy with regard to efficacy, safety, costs, and other considerations. The results of ERASur will help to define the clinical utility of TAT in patients with limited mCRC with extrahepatic disease. The trial activated in January 2023 through the NCI Cancer Trials Support Unit and is currently enrolling.
The conceptualization and design of ERASur was co-led by the Alliance for Clinical Trials in Oncology and NRG Oncology. The study incorporated input from a multidisciplinary team, comprised of experts in surgical, medical, radiation, interventional radiology, imaging and other disciplines, including patient advocacy, which was particularly important given the varied therapeutic modalities under investigation. The final study design was forged with critical input from the NCI Colon Task Force, alongside the guidance of the NCTN including the NCI Gastrointestinal steering committee and Cancer Therapy Evaluation Program. Patient advocates provided input early in the trial design in collaboration with COLONTOWN, a large online community of patients who have had or currently have CRC. Patient engagement and input was sought through multiple online polls run independently by COLONTOWN leadership in order to assist with key study design questions and to gauge patient interest for the trial within the COLONTOWN community [ 29 ]. The COLONTOWN community showed exceptionally strong support for ERASur with 90% of patients (N = 127) stating an interest in participating on the trial if they were eligible.
While inception of the trial required multidisciplinary input, successful completion of the trial will also require a concerted effort of the treatment teams at participating sites. For patients randomized to the TAT experimental arm, the selection and sequencing of metastatic-directed therapy will largely be left to the individual healthcare teams within the protocol’s guidance, including use of SABR for at least one site and surgery reserved for lung, liver, and portocaval lymph nodes. This design is by intent, both to maintain the pragmatic nature of this study and to reflect ‘real world’ clinical practice. Rigorous quality assurance mechanisms are in place in addition to two interim analyses to ensure that patients are treated safely with sufficient thresholds for stopping the study for futility.
ERASur is a study that could only be designed and conducted in a cooperative group setting with federal support. Specifically, the primary study hypothesis does not involve an investigational therapeutic or a new device, making it relatively unsuitable for pharmaceutical or device manufacturer sponsorship. This trial has the potential to significantly impact practice, with a positive result providing the much-needed high-level evidence to support the integration of TAT in mCRC, and a negative or neutral outcome suggesting that SOC systemic therapy is a preferred approach for most patients.
Background:
For patients with liver-confined metastatic colorectal cancer (mCRC), local therapy of isolated metastases has been associated with long-term progression-free and overall survival (OS). However, for patients with more advanced mCRC, including those with extrahepatic disease, the efficacy of local therapy is less clear, although it is increasingly used in clinical practice. Prospective studies to clarify the role of metastatic-directed therapies in patients with mCRC are needed.
Methods:
The Evaluating Radiation, Ablation, and Surgery (ERASur) A022101/NRG-GI009 trial is a randomized, National Cancer Institute-sponsored phase III study evaluating if the addition of metastatic-directed therapy to standard of care systemic therapy improves OS in patients with newly diagnosed limited mCRC. Eligible patients require a pathologic diagnosis of CRC, have BRAF wild-type and microsatellite stable disease, and have 4 or fewer sites of metastatic disease identified on baseline imaging. Liver-only metastatic disease is not permitted. All metastatic lesions must be amenable to total ablative therapy (TAT), which includes surgical resection, microwave ablation, and/or stereotactic ablative body radiotherapy (SABR) with SABR required for at least one lesion. Patients without overt disease progression after 16–26 weeks of first-line systemic therapy will be randomized 1:1 to continuation of systemic therapy with or without TAT. The trial activated through the Cancer Trials Support Unit on January 10, 2023. The primary endpoint is OS. Secondary endpoints include event-free survival, adverse events profile, and time to local recurrence with exploratory biomarker analyses. This study requires a total of 346 evaluable patients to provide 80% power with a one-sided alpha of 0.05 to detect an improvement in OS from a median of 26 months in the control arm to 37 months in the experimental arm with a hazard ratio of 0.7. The trial uses a group sequential design with two interim analyses for futility.
Discussion:
The ERASur trial employs a pragmatic interventional design to test the efficacy and safety of adding multimodality TAT to standard of care systemic therapy in patients with limited mCRC. | Acknowledgements:
We acknowledge Jennifer Huber for editorial assistance.
Funding:
Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Numbers U10CA180821 and U10CA180882 (to the Alliance for Clinical Trials in Oncology), U10CA180820 (ECOG-ACRIN); U10CA180868 (NRG); U10CA180888 (SWOG). No funding agency was involved in the design of the study, or the collection, analysis, and interpretation of data. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Availability of data and materials:
The datasets generated and analyzed during the current study will be available in Medidata/RAVE repository and will be available on reasonable request and pursuant to Alliance for Clinical Trials in Oncology guidelines. Please contact Dr. Hitchcock ( [email protected] ), Dr. Miller ( [email protected] ), or Dr. Romesser ( [email protected] ) to request the data from this study.
Abbreviations
AE: adverse events
CAPOX: capecitabine and oxaliplatin
CIRB: Central Institutional Review Board
CRC: colorectal cancer
CT: computed tomography
CTCAE: Common Terminology Criteria for Adverse Events
ECOG-ACRIN: Eastern Cooperative Oncology Group-American College of Radiology Imaging Network
ERASur: Evaluating Radiation, Ablation, and Surgery
FOLFIRI: 5-fluorouracil, leucovorin, and irinotecan
HAIP: hepatic artery infusion pump
mCRC: metastatic colorectal cancer
mFOLFOX6: 5-fluorouracil, leucovorin, and oxaliplatin
mFOLFOXIRI: fluorouracil, leucovorin, oxaliplatin, and irinotecan
MRI: magnetic resonance imaging
MSI: microsatellite instable
MWA: microwave ablation
NCI: National Cancer Institute
NCTN: National Clinical Trials Network
OS: overall survival
PET/CT: positron emission tomography/computed tomography
PFS: progression-free survival
RECIST: Response Evaluation Criteria in Solid Tumors
SABR: stereotactic ablative body radiotherapy
SOC: standard of care
SWOG: Southwest Oncology Group
TAT: total ablative therapy
ULN: upper limit of normal
|||
PMC10775679 | 38196638 | Introduction
Rare diseases, also known as orphan diseases, are defined by the European Union as those affecting fewer than one in 2,000 people, and in the United States as those affecting fewer than 200,000 people nationwide 1 , 2 . Rare diseases are collectively very common, and it is estimated that as many as 1 in 16 people (6.2%) suffer from one or more rare diseases 3 . This makes them a serious public health concern, as rare disease patients are far less likely to receive accurate diagnoses or, once diagnosed, to have access to effective treatments for their conditions 4 - 6 . This is due to the difficulty of studying rare diseases, a scarcity of clinical expertise and diagnostic methods, as well as the unprofitability of developing drugs targeting them. In fact, many of these diseases are so understudied and underdiagnosed that we do not know with any certainty what their true prevalence is, and how many undiagnosed patients there may be 3 . One of the primary reasons for all these problems is the difficulty of finding large enough populations of patients to conduct well-powered studies on these diseases, either in the context of basic or translational research or in the context of drug trials. This is a pressing problem, and tools such as MatchMaker Exchange, which help researchers match similar cases to increase sample size for studies of rare diseases, are widely used 7 - 9 . Further development of these tools is also an active area of research, including expanding them to include comparisons of phenotypic features mined from electronic health records (EHR) or imaging data 9 - 12 . These tools are vital for rare disease research because undiagnosed rare disease patients masquerade as healthy controls, making them invisible and inaccessible to researchers unless they can be revealed. There is an urgent need for new approaches to reveal people suffering from hidden rare diseases in research and drug trial cohorts, and in clinical practice.
In this study, we present such an approach, using a deep learning transformer model trained on EHR data. Artificial intelligence (AI) language models based on the transformer architecture, such as BERT (“Bidirectional Encoder Representations from Transformers”) and GPT (“Generative Pretrained Transformers”), have proved very successful at learning the relationships between concepts in natural languages 13 , 14 . Transformer models have also been successfully applied to problems in biology that are not directly related to language processing in methods such as AlphaFold-2, AlphaMissense, DeepMAPS, and Enformer 15 - 18 . One of the strengths of the transformer architecture is that, with appropriate tokenization and training schemes, transformers can correctly model concepts that are extremely rare, and some transformer-based models have been shown to define a new word after seeing it only a small number of times 19 - 22 . We have designed a modified transformer architecture to model phenotypic concepts derived from structured diagnosis codes in electronic health records (EHR), along with a modified training procedure designed to maximize power to screen for missing rare diagnoses. The resulting model, RarePT (Rare-Phenotype Prediction Transformer), was trained on EHR data from 436,407 individuals from the UK Biobank and validated on an independent cohort of 3,333,560 individuals from the Mount Sinai Health System in New York City, USA ( Table 1 ). RarePT shows remarkable power to recapitulate rare diagnoses, which is robust across different racial and ethnic groups, different hospitals with different coding practices, and even different countries with different health care standards and coding vocabularies. It also detects UK Biobank participants with undiagnosed rare disease, enabling empirical measurement of the true prevalence of undiagnosed cases for rare diseases.
Data collection and preprocessing
The primary training data were derived from the UK Biobank 73 . For each participant, we retrieved age at recruitment (field 21022), sex (field 31), and a list of all ICD-10 diagnosis codes recorded across all inpatient hospital records (field 41270). We also retrieved self-reported ethnicity (field 21000), body mass index (BMI, field 21001), blood pressure (fields 4079-4080), LDL cholesterol (field 30780), total cholesterol (field 30690), blood glucose (field 30740), and glycated hemoglobin (HbA1c, field 30750) as baseline cohort characteristics (shown in Table 1 ), and 23 diagnostic tests and biomarkers assayed as part of the recruitment process ( Supplementary Table S6 ), though these were not included in the data used to train the model. For this study, we represented each ICD-10 code as a binary indicator that could either be present or absent, ignoring the dates associated with each code. We mapped ICD-10 codes to version 1.2 of the Phecode classification, using the published mapping 24 . This resulted in a dataset of 436,407 participants, 239,711 female and 196,696 male, including 1,558 of the 1,570 phecodes with defined ICD-10 code mappings. Demographic and clinical characteristics of this cohort are shown in Table 1 .
We constructed a balanced dataset of training examples consisting of 100 cases and 100 controls of each phecode, excluding phecodes with fewer than 100 cases or fewer than 100 controls. This excluded 273 phecodes, leaving 1,297 unique query phecodes. These 273 phecodes remained in the training data as diagnoses but were never used as the query in any training examples. Cases were defined as participants whose phecode diagnoses contained the query phecode. Controls were defined as participants whose phecode diagnoses did not include the query phecode or any phecodes listed as exclusions for the query phecode. For sex-specific phecodes, controls were also required to be the correct sex for the query phecode. For cases, the query phecode and all exclusion phecodes were removed from the diagnosis list as part of preprocessing ( Figure 1a ). Phecode diagnoses were encoded using a many-hot encoding, with phecodes ordered linearly by their numerical code; query phecodes were encoded using a one-hot encoding with the same ordering of phecodes.
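The example construction and encodings described above can be illustrated with a toy sketch (the phecode vocabulary and helper names below are illustrative, not the actual pipeline):

```python
import numpy as np

# Toy vocabulary of phecodes, ordered linearly by numerical code.
phecodes = ["250.2", "401.1", "585.3", "759.1"]
index = {code: i for i, code in enumerate(phecodes)}

def many_hot(diagnoses):
    """Binary indicator vector over the ordered phecode vocabulary."""
    v = np.zeros(len(phecodes), dtype=np.int8)
    for code in diagnoses:
        v[index[code]] = 1
    return v

def make_case_example(diagnoses, query, exclusions=()):
    """For cases, the query phecode and its exclusions are removed
    from the diagnosis list before encoding; the query itself is
    encoded one-hot with the same phecode ordering."""
    kept = [d for d in diagnoses if d != query and d not in exclusions]
    query_vec = np.zeros(len(phecodes), dtype=np.int8)
    query_vec[index[query]] = 1
    return many_hot(kept), query_vec

diag_vec, query_vec = make_case_example(["250.2", "585.3"], query="585.3")
print(diag_vec)   # [1 0 0 0] -- the query phecode was removed from the diagnoses
print(query_vec)  # [0 0 1 0]
```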
Training examples were randomly split into five equal subsets for five-fold cross validation. Since the same individual can appear multiple times with different query phecodes, we required each cross-validation subset to contain a unique set of individuals, so that each individual can only appear in one subset. This prevents the model from improving performance by recognizing specific individuals from the training set and recapitulating the known diagnoses of those individuals. Cross-validation subsets were also required to contain similar numbers of cases and controls.
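Keeping each individual in a single cross-validation subset amounts to splitting at the level of individuals rather than training examples. A minimal stdlib sketch follows (the real pipeline additionally balances cases and controls across subsets):

```python
import random
from collections import defaultdict

def group_kfold(examples, k=5, seed=0):
    """Split (individual_id, query_phecode) examples into k subsets such
    that all examples from a given individual land in the same subset."""
    individuals = sorted({ind for ind, _ in examples})
    rng = random.Random(seed)
    rng.shuffle(individuals)
    fold_of = {ind: i % k for i, ind in enumerate(individuals)}
    folds = defaultdict(list)
    for ex in examples:
        folds[fold_of[ex[0]]].append(ex)
    return [folds[i] for i in range(k)]

# Individual "A" appears twice with different query phecodes; both examples
# must end up in the same subset so the model cannot memorize individuals.
examples = [("A", "250.2"), ("A", "401.1"), ("B", "585.3"), ("C", "759.1"),
            ("D", "250.2"), ("E", "401.1"), ("F", "585.3")]
folds = group_kfold(examples, k=5)
members = [{ind for ind, _ in fold} for fold in folds]
print(sum("A" in m for m in members))  # 1: "A" is confined to a single fold
```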
Our independent validation data were derived from the Mount Sinai Data Warehouse (MSDW), a database of clinical and operational data derived from the electronic health records (EHR) systems of the Mount Sinai Health System in New York City. These data are anonymized, standardized, and preprocessed for use in clinical and translational research. For each patient in this database, we retrieved age as of 2009 (the median date associated with the “age of recruitment” field in the UK Biobank), physician-reported sex, and a list of all ICD-10-CM diagnosis codes recorded in the EHR. We mapped these to phecodes using the published mapping for ICD-10-CM diagnosis codes, which is slightly different from the mapping for the ICD-10 codes used in the UK Biobank 24 . Patient records were processed into examples suitable for the model in the same way as described above. The final size of this cohort was 3,333,560. Demographic and clinical characteristics of this cohort are shown in Table 1 .
Study protocols were approved by the Institutional Review Board at the Icahn School of Medicine at Mount Sinai (New York City, NY, USA; GCO#07–0529; STUDY-11–01139) and all participants provided informed consent. Use of data from the UK Biobank was approved with the UK Biobank Resource under application number 16218.
Model architecture, tuning, and training
The model was implemented in Python using the Keras package 74 . Figure 1a shows a schematic of the model architecture. The input diagnosed phecodes feed into a stack of modified transformer decoder modules, based on the TransformerDecoder layer implemented in the KerasNLP package 75 . The standard TransformerDecoder layer was modified to remove the causal attention mask that prevents the self-attention layer from paying attention to positions that are later in the sequence than the token currently being considered. Since the phecode encoding is ordered by phenotype category and we are ignoring temporal sequencing, this causal mask would be inappropriate. Each decoder layer also contains a cross-attention layer which takes input from the encoded query phecode, allowing the model to learn attention relationships between the diagnosis phecodes and the query phecode. To adjust for demographic variables, the demographic variables are passed through a single densely connected layer to transform them into the same dimension as the phecodes, and then added to the output of the transformer layers and normalized. Finally, the prediction is given by a dot product between the input query phecodes and the demographics-adjusted output of the transformer layer, and transformed into a probability score using a softmax function. The Python code implementing this architecture will be made publicly available along with this publication.
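The cross-attention step, in which the diagnosis-sequence representation attends to the encoded query phecode, can be illustrated with generic scaled dot-product attention in numpy. This is a mechanism sketch only, not the KerasNLP implementation; the dimensions and random inputs are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(diag_repr, query_repr):
    """Scaled dot-product attention: rows of diag_repr (n, d) attend to
    rows of query_repr (m, d); attention weights sum to 1 per row.
    No causal mask is applied, matching the modification described
    in the text (phecodes are ordered by category, not time)."""
    d = diag_repr.shape[-1]
    scores = diag_repr @ query_repr.T / np.sqrt(d)  # (n, m)
    weights = softmax(scores, axis=-1)
    return weights @ query_repr, weights            # (n, d), (n, m)

rng = np.random.default_rng(0)
diag_repr = rng.normal(size=(6, 8))   # 6 diagnosis positions, width 8
query_repr = rng.normal(size=(1, 8))  # the single encoded query phecode
out, weights = cross_attention(diag_repr, query_repr)
print(out.shape)  # (6, 8)
```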
Hyperparameter tuning was performed using the Hyperband algorithm, as implemented in the KerasTune package 25 , 76 . The list of hyperparameters and their final tuned values are found in Supplementary Table S13 . We randomly sampled 80% of the training data to use for training and used the remaining 20% as the validation set for the hyperband algorithm, choosing the hyperparameters that minimized the training loss function on the validation set. For five-fold cross-validation runs, this 20% held out validation sample was contained within the training set and did not overlap the cross-validation test set. The hyperband algorithm was run for up to 18 epochs per model, stopping if validation loss failed to improve in 5 consecutive epochs. The final selected model was then trained for up to 54 epochs, and the best epoch was selected based on validation loss. Finally, after tuning was complete, the held-out validation set was added back into the training set and selected model was retrained for the selected number of epochs on the complete training set. Again, the Python code implementing this training procedure will be made publicly available along with this publication. We repeated this hyperparameter tuning for each of the five training subsets produced by cross-validation as well as on the full training set, and all six runs selected identical values for all hyperparameters. Models were trained using NVIDIA A100 GPUs on the Mount Sinai local high-performance computing cluster, Minerva. Each cross-validation run took approximately 5 hours of GPU time to tune and train, and the full model took approximately 6 hours, for a total of approximately 31 hours.
Prediction of rare phenotypes
We calculated the prevalence of each phecode in the UK Biobank by dividing the number of cases by the total number of participants. We identified 155 rare phecodes with prevalence less than 1 in 2,000, or 0.05%, corresponding to the European Union definition of a rare disease. We calculated model predictions from each of the six trained models (five cross-validation models and one full-dataset model) using each of these 155 phecodes as a query for 436,407 participants in the UK Biobank, excluding from each model the participants contained in its own training set. We additionally produced predictions from the full trained model for 3,333,560 patients in the MSDW cohort for each of the 151 rare phecodes that were also present in that cohort. Generating model predictions for the UK Biobank cohort took approximately 3 hours of GPU time for each of the six models, for a total of approximately 18 hours; generating model predictions for the MSDW cohort took approximately 45 hours of GPU time. The MSDW cohort took substantially longer because the dataset was too large for the model to fit into memory and had to be broken up into batches.
We quantified the performance of our models by diagnostic odds ratios and positive predictive values. We arbitrarily chose a threshold probability score of 0.95 to represent a relatively high-confidence case prediction, and treated predictions with probability score > 0.95 as predicted cases and ≤ 0.95 as predicted controls. We quantified performance using odds ratio (OR) and positive predictive value (PPV), as these are measures relevant to diagnostic screening tests 29 . We calculated OR as

OR = (TP × TN) / (FP × FN)

where TP is the number of true positive predictions (cases correctly predicted as cases), FP is the number of false positive predictions (controls incorrectly predicted as cases), TN is the number of true negative predictions (controls correctly predicted as controls), and FN is the number of false negative predictions (cases incorrectly predicted as controls). In other words, this is the ratio between the odds of a positive prediction being a case and the odds of a negative prediction being a case. We added a correction of 0.5 to each count to correct for zeros 77 . We calculated PPV as

PPV = TP / (TP + FP)

In other words, this is the probability that a positive prediction is a case. In all instances, we excluded controls with an exclusion phecode, controls whose sex did not match the phecode, and all individuals who were included in the training set of the cross-validated models.
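The odds ratio with zero-correction and the positive predictive value can be computed directly (a sketch of the calculation described; the example counts are illustrative, not study results):

```python
def diagnostic_odds_ratio(tp, fp, tn, fn, correction=0.5):
    """Odds ratio with 0.5 added to each count (Haldane-Anscombe
    correction, so zero cells do not yield 0 or infinite ORs)."""
    tp, fp, tn, fn = (x + correction for x in (tp, fp, tn, fn))
    return (tp * tn) / (fp * fn)

def positive_predictive_value(tp, fp):
    """Probability that a positive prediction is a true case."""
    return tp / (tp + fp)

# Illustrative counts: 10 true positives, 5 false positives,
# 100 true negatives, 2 false negatives.
print(round(diagnostic_odds_ratio(10, 5, 100, 2), 1))  # 76.7
print(round(positive_predictive_value(10, 5), 2))      # 0.67
```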
Mortality and DALY analyses
We estimated disability-adjusted life years (DALY) and its components, years lost to disability (YLD) and years of life lost (YLL), for UKBB individuals using per-disease estimates from the 2019 Global Burden of Disease (GBD) study. We used the 80 non-overlapping non-communicable diseases that account for the majority of a population’s DALY, as described by Jukarainen et al. 32 , 33 . GBD definitions of specific diseases and conditions were used to label individuals affected by these diseases in the UK Biobank. Estimates of disease burden in the UK from GBD were then applied to individuals with each disease to produce estimated values of DALY, YLD, and YLL 32 . These estimated values were tested against RarePT predictions by linear regression. We retrieved a single prediction score by using the model trained on the full dataset for individuals who were not included in the training set, and the appropriate cross-validation model for individuals who were included in the training set (that is, the cross-validation model whose training set did not include that individual). We turned this score into a binary prediction using an arbitrary threshold of 0.95. We then performed linear regression testing the ability of this prediction (independent variable) to predict DALY, YLD, or YLL (dependent variable), controlling for age, sex, and self-reported ethnicity. We repeated this analysis both including all UK Biobank participants and excluding known diagnosed cases and exclusions for each phecode.
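The covariate-adjusted regression described here can be sketched with ordinary least squares in numpy (the actual analysis used the statsmodels package; the synthetic data and effect size below are purely illustrative):

```python
import numpy as np

def ols_coefficients(X, y):
    """Least-squares coefficients of y regressed on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 2000
prediction = rng.integers(0, 2, n)   # binary RarePT prediction (score > 0.95)
age = rng.uniform(40, 70, n)
sex = rng.integers(0, 2, n)
# Synthetic outcome: DALY with a true adjusted effect of +3.0 for predicted cases.
daly = 0.1 * age + 1.0 * sex + 3.0 * prediction + rng.normal(0, 1, n)

# Design matrix: intercept, the prediction of interest, and covariates.
X = np.column_stack([np.ones(n), prediction, age, sex])
beta = ols_coefficients(X, daly)
print(round(beta[1], 1))  # close to 3.0: effect of the prediction, adjusted for age/sex
```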
We additionally retrieved date of death and date of recruitment from the UK Biobank (fields 40000 and 53) and performed an analysis of mortality using Cox proportional hazard regression. We treated time from recruitment to death as a right-censored dependent variable, again using the binary RarePT prediction as an independent variable along with age at recruitment, sex, and self-reported ethnicity. As with DALY, we repeated this analysis both including all UK Biobank participants and excluding known diagnosed cases and exclusions for each phecode. All regressions were performed in Python using the statsmodels package 78 .
Biomarker and diagnostic test analysis
We collected biomarkers and diagnostic tests associated with phecodes using the SNOMED-CT database of clinical terms 34 , 35 . We identified all ICD-10 codes that mapped to any of our 155 rare phecodes and also mapped to a SNOMED-CT term with an “interprets” relationship (concept 363714003). The “interprets” relationship indicates that the concept represented by the diagnosis code has an underlying evaluation that is “intrinsic to the meaning of” that concept 79 . Examples of this kind of relationship include the relationship between obesity and measured body weight, hypercholesterolemia and total serum cholesterol, or thrombocytopenia and platelet count. In most cases, SNOMED-CT also identifies the direction of the relationship using the “has interpretation” relationship (concept 363713009). For example, hypercholesterolemia is interpreted as total serum cholesterol above reference range, while thrombocytopenia is interpreted as platelet count below reference range. For each concept that was the target of an “interprets” relationship, we manually searched for a corresponding measurement available in the UK Biobank and a corresponding reference range. The result was a list of 75 relationships between rare phecodes and UK Biobank data fields, encompassing 32 rare phecodes and 23 data fields, each with an expected direction of relationship (above, below, or outside) and sex-specific reference ranges ( Supplementary Table S6 ).
As with the previously described regression analyses, we retrieved a single prediction score for each of these 32 rare phecodes by using the model trained on the full dataset for individuals who were not included in the training set, and the appropriate cross-validation model for individuals who were included in the training set (that is, the cross-validation model whose training set did not include that individual). We turned this score into a binary prediction using the same 0.95 threshold. We then performed two regression analyses testing the ability of this binary prediction (independent variable) to predict the corresponding data field (dependent variable), controlling for age, sex, and self-reported ethnicity. In the first analysis, we used the expected direction of relationship and the reference range to construct a binary variable indicating whether each individual had an abnormal result in the direction expected. We performed logistic regression using this binary variable as the dependent variable. We repeated this analysis both including all participants and excluding individuals labelled as cases or exclusions for each phecode. This regression tests whether the model can predict individuals with abnormal test results consistent with a diagnosis even in individuals labelled as controls. In the second analysis, we normalized the values of each biomarker within the reference range so that the sample population for each sex had mean 0 and variance 1 after excluding all individuals with values outside the reference range. Finally, we aligned the values so that the expected direction of association was always positive, by multiplying them by −1 for “below reference range” relationships and taking their absolute value for “outside reference range” relationships. We performed standard linear regression using this normalized and aligned value as the dependent variable. 
We repeated this analysis both including all participants and excluding both individuals labelled as cases or exclusions and individuals with abnormal test results. This regression tests whether the model can predict individuals with elevated or reduced results even if they are still within the normal range. Regressions were performed in Python using the statsmodels package 78 .
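The within-reference-range normalization and alignment step can be sketched as follows. This is an illustrative sketch only: it uses the population standard deviation, omits the per-sex stratification described above, and the function name is ours:

```python
import statistics


def normalize_and_align(values, direction):
    """Z-score biomarker values already restricted to the reference range, then
    align them so the expected direction of association is positive:
    'above' keeps the sign, 'below' flips it, 'outside' takes the absolute value."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    z = [(v - mean) / sd for v in values]
    if direction == "below":
        return [-x for x in z]
    if direction == "outside":
        return [abs(x) for x in z]
    return z
```

After this transformation, a positive linear regression coefficient always corresponds to movement toward the abnormal end of the reference range, regardless of the biomarker's direction of relationship.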
Finally, we identified a set of “confirmed controls” for each phecode, defined as individuals labelled as controls who also had all associated results within the reference range and within one standard deviation of the population mean for their sex. We consider these individuals very unlikely to be undiagnosed cases incorrectly labelled as controls. We used the performance of our model on these confirmed controls to estimate the false positive rate of our model for each of the 32 phecodes with available relationships to test results. We then used this false positive rate to estimate the number of undiagnosed cases using the following relationship:
N_undiag = (P − f · N) / (t − f)

where N_undiag represents the number of undiagnosed cases; N represents the total number of individuals with unknown case-control status, excluding controls confirmed by laboratory tests but including undiagnosed cases; P represents the total number of unknowns predicted as cases by the model, again excluding controls confirmed by laboratory tests but including undiagnosed cases; f represents the false positive rate of the model as estimated from confirmed controls; and t represents the true positive rate of the model as estimated from diagnosed cases. See Supplementary Note 1 for derivation and discussion of this relationship.
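The counting argument behind this estimate is that positive predictions among the unknowns arise from undiagnosed cases at the true positive rate and from true controls at the false positive rate; solving for the number of undiagnosed cases gives the sketch below (our own illustration with hypothetical numbers, not the published code):

```python
def estimate_undiagnosed(n_unknown, n_predicted_cases, fpr, tpr):
    """Estimate undiagnosed cases U among n_unknown individuals of unknown status.
    Expected positives among unknowns: P = tpr*U + fpr*(n_unknown - U),
    which solves to U = (P - fpr * n_unknown) / (tpr - fpr)."""
    return (n_predicted_cases - fpr * n_unknown) / (tpr - fpr)


# Hypothetical example: 100,000 unknowns, 3,000 predicted cases,
# FPR of 1% estimated from confirmed controls, TPR of 60% from diagnosed cases
print(estimate_undiagnosed(100_000, 3_000, 0.01, 0.60))
```

Note that the estimate degenerates as the true and false positive rates approach each other, so it is only informative for phecodes where the model discriminates well on confirmed labels.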
The Python code implementing all these analyses will be made publicly available along with this publication.
Software Package and Workflow
For portability and reproducibility, model training and analysis code is formatted as a Snakemake workflow 80 . This allows easy retraining of the RarePT model and reproduction of the analyses reported here on any appropriately formatted individual-level dataset. After creating an appropriately named and formatted input data file and setting up Snakemake for their execution environment, users can train a new model with a single command targeting the desired model file. The number of cases and controls sampled, the number of cross-validation folds used for testing, and the random seed can be changed by changing the appropriate values in the targeted filename. Likewise, a single command targeting the cross-validation output trains a model and uses cross-validation to generate model predictions for all rare phecodes, and a command targeting predictions for a user-supplied dataset uses the UK Biobank trained model to generate model predictions for all rare phecodes in that dataset. Snakemake can be configured for many different high-performance computing and cloud computing environments and, when properly configured, automatically manages resource requirements and package dependencies.
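Since the exact Snakemake target names are defined by the workflow and are not reproduced here, the following invocations are purely hypothetical illustrations of the pattern described, in which the sampling parameters, fold count, and random seed are encoded in the requested filename:

```shell
# Hypothetical target names -- the real workflow defines its own naming scheme.
# Train a model with 100 cases/controls per phecode, 5 CV folds, seed 42:
snakemake --cores 8 results/model_cases100_folds5_seed42.keras

# Train and generate cross-validated predictions for all rare phecodes:
snakemake --cores 8 results/cv_predictions_cases100_folds5_seed42.csv

# Apply the UK Biobank-trained model to a user-supplied dataset:
snakemake --cores 8 results/user_predictions_mydata.csv
```

Because Snakemake infers the rule chain from the requested output file, changing a value such as `folds5` to `folds10` in the target name is sufficient to retrain with different parameters.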
The Snakemake workflow will be published with acceptance of this manuscript in a peer-reviewed journal. Prior to formal publication, it is available on request from the authors. | Results
Model training and cross-validation
We implemented a transformer model with a self-attention mechanism similar to AI language models such as BERT and GPT, along with a “masked diagnosis modeling” training objective by analogy to the “masked language modeling” objective used by some of these language models 19 . In this approach, training examples consist of complete sequences with a single token removed, and the model is trained to reconstruct the missing token. In the natural language processing case, this is a sequence of words with a single word removed; in our case, it is a participant record with a single diagnosis removed ( Figure 1a ). The model learns the meanings of tokens based on the context they appear in, resulting in embeddings that cluster tokens that commonly appear together and tokens that appear in similar context. Models trained with this objective are known to learn informative embeddings even for very rare tokens in many cases 19 - 22 . We made use of this feature to train a model to predict rare diagnoses, a critical need due to underdiagnosis and understudying of rare diseases.
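As a minimal sketch of this idea (our own illustration, not the RarePT implementation; the field names and phecodes are hypothetical), a masked-diagnosis training example pairs a participant's remaining diagnoses and demographics with one held-out diagnosis to reconstruct:

```python
def make_masked_example(record, masked_code):
    """Build one masked-diagnosis training example: the held-out phecode
    becomes the prediction target, everything else becomes the context."""
    context = {
        "query_phecode": masked_code,
        "other_phecodes": sorted(record["phecodes"] - {masked_code}),
        "age": record["age"],
        "sex": record["sex"],
    }
    label = 1 if masked_code in record["phecodes"] else 0  # case vs control
    return context, label


participant = {"phecodes": {"401.1", "250.2", "715.3"}, "age": 58, "sex": "F"}
ctx, y = make_masked_example(participant, "250.2")
print(ctx["other_phecodes"], y)  # ['401.1', '715.3'] 1
```

The same record can yield a control example by querying a phecode the participant does not carry, which is how both cases and controls enter the training corpus.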
An additional advantage of the masked diagnosis modeling training objective for rare tokens is that it allows us to weight the importance of tokens to the training objective independent of their prevalence in the training corpus. This is because each training example specifies which token the model must predict correctly to be scored as successful, and the model is not necessarily required to predict every token in each example. The importance of each token to the training objective is determined by how many examples have it as the masked token. In order to prevent very common diagnoses from dominating the learned embeddings, we limited training examples to a fixed number of cases and controls for each diagnosis. While we used 100 cases and controls for each diagnosis, in principle this is a tunable parameter of the training process. Lower values allow rarer diagnoses to be included, while higher values increase the amount of training data available.
In this study, we express diagnoses as phecodes 23 . We determined phecodes from ICD-10 codes for 436,407 participants in the UK Biobank based on a standard mapping 24 . We then filtered out all phecodes with fewer than 100 cases and controls and constructed a training dataset consisting of 100 randomly selected cases and 100 randomly selected controls for each phecode. The resulting training set consisted of 259,400 training examples representing 1,297 query phecodes and 111,331 unique participants. For each training example, input data included the following features:
The identity of the query phecode
All other phecodes for which the participant is considered a case
Age at recruitment
Sex reported from recruitment
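The balanced training-set construction can be sketched as follows, assuming hypothetical mappings `cases` and `controls` from each phecode to its set of eligible participant IDs (an illustration, not the published code):

```python
import random


def build_training_set(cases, controls, n_per_group=100, seed=0):
    """Sample a fixed number of cases and controls per phecode, skipping
    phecodes with too few of either, so that rare and common diagnoses carry
    equal weight in the masked-diagnosis training objective."""
    rng = random.Random(seed)
    examples = []
    for phecode in cases:
        if len(cases[phecode]) < n_per_group or len(controls[phecode]) < n_per_group:
            continue  # filtered out: fewer than n_per_group cases or controls
        for pid in rng.sample(sorted(cases[phecode]), n_per_group):
            examples.append((pid, phecode, 1))  # label 1 = case
        for pid in rng.sample(sorted(controls[phecode]), n_per_group):
            examples.append((pid, phecode, 0))  # label 0 = control
    return examples
```

With `n_per_group=100`, each retained phecode contributes exactly 200 examples, matching the 1,297 query phecodes × 200 = 259,400 training examples reported above.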
These training examples were split into 5 subsamples for cross-validation, stratified so that each participant appeared in only one split and so that each split contained a similar number of cases and controls. Neural network architecture and other training hyperparameters were tuned on the training data for each split using the Hyperband algorithm 25 , and then the tuned model was trained on the same data and tested on the held-out test data; see Methods for details of model tuning and training parameters. The final tuned architecture is shown in Figure 1b ; details of training performance can be found in Supplementary Figure S1 and Supplementary Table S1 . In general, the models performed well on the test data and showed only minor loss of performance between training and test data.
RarePT predicts rare diagnoses in the UK Biobank
To test the ability of the trained model to predict rare diagnoses, we first selected all phecodes appearing in fewer than 1 in 2,000 UK Biobank participants, corresponding to the definition of rare diseases used by the European Union 2 . There were 155 rare phecodes meeting this criterion, shown in Supplementary Table S2 . Not all phecodes that are rare in the UK Biobank represent phenotypes that meet the definition of rare diseases in the general population. One reason for this is the known bias of the UK Biobank population towards healthier and older participants 26 - 28 , which reduces the apparent prevalence of many diseases, especially severe diseases with early onset. For example, phecode 315.3 “mental retardation” appears in fewer than 200 participants in the UK Biobank even though the disorders it represents are much more common in the general population, which is likely because severe childhood disorders are underrepresented in this cohort of healthy adults. It is also likely that many of these rare phecodes correspond to diagnosis codes that rarely appear in electronic health records (EHR) despite the conditions they refer to being common. Likely examples of this include 367.4 “presbyopia” and 523.1 “gingivitis.” Nevertheless, even if not all of these phecodes represent phenotypes that are rare in the general population, they do represent phenotypes that are rare in the data used to train our model, and the model’s performance on these phecodes is informative about how our methodology handles rare phenotypes. In total, 21,636 of the 436,407 participants tested have one or more of these rare diagnoses, giving them a cumulative prevalence of 5.0%. This matches the estimated cumulative prevalence of 1.5-6.2% for rare diseases in the general population 3 , suggesting that our selection of rare phecodes does accurately capture the population distribution of rare diseases.
After training on a 111,331-participant subset of UK Biobank data constructed to force each phecode to have prevalence of 50%, we measured RarePT’s performance in the full UK Biobank dataset of 436,407 participants. We arbitrarily chose a threshold of 0.95 in the model’s probability score output, so that participants with a score of 0.95 or higher in a given phecode were treated as positive predictions for that phecode. With this definition, across all five cross-validated models, we generated specific positive predictions for each of our 155 rare phecodes. The number of positive predictions varied by phecode, ranging between 85 and 22,000 with a median of 2,135 positive predictions per phecode ( Supplementary Table S3 ). These positive predictions are broadly distributed across participants rather than being concentrated in a small group of unhealthy participants, with no participant receiving more than 29 positive predictions and 41% of participants (177,484) receiving a positive prediction for at least one of the 155 phecodes. Figure 2 shows the performance of the 5 cross-validated models at predicting rare phecodes in the full dataset, excluding each model’s training data. We measured prediction performance using diagnostic odds ratio (OR), defined as the ratio between the odds of a participant having a diagnosis given a positive prediction from the model and the odds of a participant having a diagnosis given a negative prediction from the model. The median OR for a positive prediction across all 155 rare phecodes and across the five models trained in cross-validation was 48.0. Some specific phecodes reached a median OR over 20,000, and the lowest median OR for any rare phecode was 5.13 ( Figure 2a , Supplementary Table S3 ). These values compare favorably to many commonly used diagnostic tests, where diagnostic odds ratios in the range of 20-50 are considered very good 29 , 30 .
Similarly, the positive predictive value (PPV) for cases is nearly 40% for some phecodes, which is well within the range of a useful screening test ( Supplementary Figure S2 , Supplementary Table S3 ). Because PPV depends on the prevalence of the condition within the test population, we expect this number to increase further when applying this method in situations where the prior expectation of encountering a given diagnosis is increased, such as in patients with undiagnosed rare conditions or patients who carry rare genetic variants. Importantly, the predictions are able to distinguish not only between cases and controls but also between cases for one phecode and cases for another, indicating that RarePT is making specific predictions for each phecode rather than measuring general health ( Figure 2b ).
Model trained on UK Biobank is predictive in an independent EHR cohort
We applied the trained RarePT model to an independent dataset derived from the Mount Sinai Data Warehouse (MSDW), consisting of anonymized EHR for a cohort of 3,333,560 patients seen in the Mount Sinai Health System in New York City. We determined phecodes for these participants in the same way as for the UK Biobank participants, but using a mapping designed for the US clinical modification to ICD-10 (ICD-10-CM) rather than the international standard ICD-10 system used by UK hospitals 24 . Of the 155 phecodes determined to be rare in the UK Biobank cohort, 151 were present in the MSDW cohort. As a health system based cohort, this cohort is expected to be significantly less healthy than the UK Biobank 31 , and therefore we expect most phecodes to have higher prevalence than in the UK Biobank cohort. Nevertheless, a majority of the rare phecodes we tested (86/151; 57%) still had prevalence less than 1 in 2,000 in the MSDW cohort ( Supplementary Table S2 ). Likewise, we expect more positive predictions for each phecode, both due to the dataset being over 7-fold larger and due to participants being less healthy in general.
For these phecodes in the MSDW cohort, RarePT produced between 100 and 721,000 positive predictions per phecode, with a median of 11,500, and produced at least one positive prediction in 47% of participants (1,518,757). These predictions performed similarly to the predictions for the UK Biobank cohort, with a median OR of 30.6 across all 151 phecodes ( Figure 2a , Supplementary Table S4 ). Performance for individual phecodes was also strongly correlated across the two datasets (Pearson r = 0.456, p = 5.03 × 10 −40 , t-test; Figure 2c ). The fact that performance is similar across the two datasets indicates that RarePT’s predictions are based on features that are robust to different methodologies for sample ascertainment and data collection, rather than features that are only informative in the specialized context of the UK Biobank. This replication is especially remarkable given the extensive differences between the two cohorts: in addition to one being a population-based cohort of healthy volunteers and the other being a health system cohort, these cohorts are also from different countries with different standard medical practices, different billing structures and coding systems, and different distributions of race, ethnicity, and genetic ancestry. This indicates the wide applicability of the RarePT method and suggests that its performance does not depend on specific features of diagnosis coding in a particular health system.
Rare disease predictions are associated with mortality, disease burden, and known diagnostic biomarkers
To further demonstrate that RarePT is capturing clinically relevant signals of disease rather than bioinformatic artifacts related to diagnosis coding, we performed regression analyses to test the association of positive predictions with mortality, disability, and, where available, known diagnostic biomarkers. We retrieved the latest mortality data for UK Biobank participants as of October 2023, and performed Cox proportional hazard regression to test whether a positive prediction is associated with mortality, controlling for age, sex, and self-reported ethnicity. 101 phecodes (65% of phecodes tested) had a significant association (p < 0.05) with increased mortality, of which 93 (60%) remained significant after Bonferroni correction for 155 phecodes (p < 0.00032). The median phecode had a regression coefficient of 0.70, corresponding to a hazard ratio of 2.01, or a twofold increase in mortality rate ( Supplementary Table S5 ).
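Throughout, Cox regression coefficients are converted to hazard ratios as HR = exp(β); for example:

```python
import math


def hazard_ratio(beta):
    """Convert a Cox proportional-hazards regression coefficient to a hazard ratio."""
    return math.exp(beta)


print(round(hazard_ratio(0.70), 2))  # 2.01, the median phecode reported above
```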
Next, we estimated Disability Adjusted Life Years (DALY) and its two components, Years of Life Lost (YLL) and Years Lived with Disability (YLD), for 80 conditions for all UK Biobank participants 32 . These measurements represent the number of years lost to both mortality and disability as a result of illness and are used as a measure of disease burden, particularly in the Global Burden of Disease study 33 . We performed linear regressions with DALY, YLD, and YLL as the dependent variables to test whether a positive prediction is associated with greater disease burden. In all of these regressions, we controlled for age, sex, and self-reported ethnicity. 113 phecodes (73% of phecodes tested) had a significant association (p < 0.05) with increased estimated DALY, and 106 (68%) remained significant after Bonferroni correction for 155 phecodes (p < 0.00032). 134 phecodes (87%) had a significant association with increased estimated YLD individually, 133 (86%) after Bonferroni correction; 110 phecodes (71%) had a significant association with increased estimated YLL individually, 106 (68%) after Bonferroni correction. For the median phecode, a positive prediction was associated with an increase in estimated DALY of 1.1 years ( Supplementary Table S5 ).
To identify diagnostic biomarkers, we used the SNOMED-CT vocabulary of clinical terms 34 , 35 to identify phenotypes whose clinical definition includes laboratory tests that are available for large numbers of participants in the UK Biobank. We identified 75 defined relationships between 32 rare phecodes and 23 laboratory tests ( Supplementary Table S6 ). These tests were performed as part of the UK Biobank recruitment process and were generally not returned to participants or their physicians, so the availability of a test result does not indicate that it was ordered by a physician and the result of a test was not visible to the physicians responsible for entering diagnoses into the participants’ EHR. Since RarePT makes its predictions using only diagnosis codes and has no access to physician-ordered laboratory tests except through diagnosis codes, this means our model’s predictions are independent of these test results. This is in contrast to health system based cohorts, including our MSDW cohort, where diagnostic tests are ordered and administered in the context of treating the patient, so that the presence and timing of a test are informative about the judgment of the health care providers and the test result forms part of the diagnostic criteria 36 .
For each of these 75 relationships, we performed a logistic regression to test whether a confident case prediction is associated with abnormal test results, again controlling for age, sex, and ethnicity. 54 of these regressions, representing 72% of these relationships, had a result that was in the expected direction and statistically significant (p < 0.05), and 45 (60%) remained significant after Bonferroni correction for 75 regressions (p < 0.00067). The median regression coefficient was 0.57, corresponding to an OR of 1.77. In other words, for the median diagnostic test, a participant with a positive prediction from RarePT had 77% higher odds of having an abnormal test result. In 100 random permutations of phecode-laboratory test relationships, no permutation showed as many Bonferroni-significant associations (p < 0.01) ( Figure 3a , Supplementary Table S7 - S8 ). We additionally performed linear regression for each of these relationships, testing for a relationship between the model prediction and the quantitative test result. 43 of these regressions, representing 57% of these relationships, had a result that was in the expected direction and statistically significant, and 38 (51%) remained significant after Bonferroni correction. Again, 0 out of 100 random permutations showed as many Bonferroni-significant associations (p < 0.01) ( Figure 3a , Supplementary Table S7 - 8 ).
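The permutation control can be sketched as follows; here `count_significant` is a hypothetical stand-in for running all 75 regressions on a permuted phecode-test assignment and counting Bonferroni-significant results, and the add-one estimator is our choice to avoid reporting p = 0:

```python
import random


def permutation_p(observed_hits, phecodes, tests, count_significant, n_perm=100, seed=0):
    """Empirical p-value for the observed number of significant phecode-test
    associations: how often does a random reassignment of laboratory tests to
    phecodes produce at least as many significant results?"""
    rng = random.Random(seed)
    as_extreme = 0
    for _ in range(n_perm):
        shuffled = tests[:]
        rng.shuffle(shuffled)  # random reassignment of tests to phecodes
        if count_significant(list(zip(phecodes, shuffled))) >= observed_hits:
            as_extreme += 1
    return (as_extreme + 1) / (n_perm + 1)  # add-one estimator avoids p = 0
```

With 100 permutations and no permutation reaching the observed count, this returns 1/101 ≈ 0.0099, consistent with the p < 0.01 threshold reported above.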
Taken together, these analyses demonstrate that positive predictions from RarePT do not merely predict diagnosis codes for rare diseases, but also capture clinically and biologically relevant features relevant to the diagnoses and to health outcomes more generally.
Disease predictions suggest high rates of underdiagnosis for rare diseases
It has been demonstrated for many diseases, both rare and common, that only a fraction of affected individuals actually have a diagnosis annotated in their EHR 37 - 43 . As a result, it is likely that many of the participants annotated as controls in our dataset are actually undiagnosed cases. In order to evaluate RarePT’s performance in these undiagnosed cases, we repeated the regression analyses of mortality and estimated DALY restricting to participants labelled as controls, so that participants who had the corresponding diagnosis in their EHR were excluded. The mortality analysis produced similar results after excluding known diagnosed cases: 101 phecodes (67% of phecodes tested) had a significant association (p < 0.05) with increased mortality, of which 93 (60%) remained significant after Bonferroni correction for 155 phecodes (p < 0.00032). The median regression coefficient for the proportional hazard regression on mortality was 0.86, corresponding to a hazard ratio of 2.4. The DALY analysis also produced similar results: 114 phecodes (74% of phecodes tested) had a significant association (p < 0.05) with increased estimated DALY, and 106 (68%) remained significant after Bonferroni correction for 155 phecodes (p < 0.00032). 131 phecodes (85%) had a significant association with increased estimated YLD individually, 126 (81%) after Bonferroni correction; 110 phecodes (71%) had a significant association with increased estimated YLL individually, 104 (67%) after Bonferroni correction. For the median phecode, a positive prediction was associated with an increase in DALY of 1.5 years in controls. These results demonstrate that RarePT predictions are associated with health outcomes even when a diagnosis is not present in the EHR, suggesting that RarePT identifies clinically relevant features even in undiagnosed individuals and may be identifying undiagnosed cases.
We next repeated the logistic regression analysis testing RarePT predictions against abnormal test results. As expected, excluding known cases reduced the significance of many, but not all, of these regressions. Nevertheless, 47 of these regressions, representing 63% of these relationships, had a result that was in the expected direction and statistically significant (p < 0.05), and 36 (48%) remained significant after Bonferroni correction for 75 regressions (p < 0.00067). The median regression coefficient was 0.45, corresponding to an OR of 1.57. As with the regressions that included cases, in 100 random permutations of phecode-laboratory test relationships, no permutation showed as many Bonferroni-significant associations (p < 0.01) ( Figure 3b , Supplementary Tables S10 - S11 ). We also repeated the linear regression analysis testing RarePT predictions against quantitative test results, excluding both known cases and participants with abnormal test results. 29 of these regressions, representing 39% of these relationships, had a result that was in the expected direction and statistically significant, with 21 (28%) remaining significant after Bonferroni correction. Again, 0 out of 100 random permutations showed as many Bonferroni-significant associations ( Figure 3b , Supplementary Tables S10 - S11 ). This analysis supports the conclusion that RarePT’s predictions are predictive not only of existing rare diagnoses, but also of undiagnosed cases.
In order to estimate the number of these undiagnosed cases that exist in the UK Biobank dataset, we first identified participants whose test results show that they are unlikely to be undiagnosed cases for a particular phecode. We defined this category of “confirmed controls” as participants whose test results fell within 1 standard deviation of the population mean for a particular test. This is possible because these tests were administered in an unbiased way to a large cross-section of participants, and the presence of a negative test result does not indicate that a physician ordered the test to rule out a suspected diagnosis. We then measured RarePT’s performance based on these confirmed controls and the observed diagnosed cases. Assuming that RarePT performs similarly for unconfirmed controls and undiagnosed cases as for confirmed controls and diagnosed cases, the prevalence of undiagnosed cases can be estimated by comparing the expected number of false positives among unconfirmed controls to the actual number of unconfirmed controls predicted as cases ( Figure 3c , Supplementary Note 1 ).
We estimated the number of undiagnosed cases and the fraction of actual cases that are undiagnosed for each rare phecode with an associated diagnostic test, using a bootstrap sampling procedure to obtain 95% confidence intervals ( Figure 3d - e , Supplementary Table S12 ). The estimated proportion of undiagnosed cases varied widely by phecode, but nearly three-quarters of phecodes tested (23/32 = 72%) had an estimate greater than 20%. Even more remarkably, nearly two-thirds of phecodes tested (20/32 = 63%) had more undiagnosed cases than diagnosed cases, and over a third (12/32 = 38%) had a bootstrap confidence interval entirely above the number of diagnosed cases. The median estimated rate of underdiagnosis across all phecodes tested was 83%, meaning that we estimate 83% of cases are undiagnosed for the median rare phecode. This analysis suggests that there are a very large number of undiagnosed cases of rare diseases in large population biobanks like the UK Biobank. Furthermore, it suggests that RarePT is able to predict some of these hidden undiagnosed cases, allowing them to be identified for the first time. | Discussion
Here we present RarePT, a transformer-based phenotype prediction method designed to predict rare disease diagnoses based on diagnosis codes present in a patient’s electronic health records (EHR). We apply this method to predicting rare disease in the UK Biobank, and find that a very large fraction of rare disease cases are undiagnosed. Our method adds to a growing collection of phenotype prediction methods that use machine learning to clean and extend EHR data for downstream analysis 44 - 48 . Our method is distinct from other approaches in that it focuses specifically on rare disease. It is typically difficult to train machine learning approaches for rare disease because the low prevalence of these diseases limits the availability of training data. We overcame this difficulty using a “masked diagnosis modeling” approach inspired by the approaches used to train AI language models such as BERT 19 . This approach learns about diagnoses by identifying which other diagnoses are most likely to appear in similar contexts, allowing it to learn informative features even for rare diagnoses. In addition to this training strategy, we reweighted our training data to give equal importance to rare and common diagnoses, boosting our power to predict rare diseases.
The trained RarePT model is highly predictive of a wide range of rare disease diagnoses, showing the promise of our deep learning approach as a screening test for specific rare diagnoses that could be applied in a clinical setting in the future. Across all rare phecodes, RarePT’s predictions are associated with a median diagnostic odds ratio (OR) of 48.0 in cross-validation – that is, participants predicted to have a rare diagnosis by our model are 48.0 times more likely to have that diagnosis in their EHR compared to participants without such a prediction. For some specific rare diseases, this performance is even better, with the top 10% of diagnoses achieving a diagnostic OR over 350 in cross-validation and the top 5% achieving a diagnostic OR over 1,500 in cross-validation. These values compare favorably to many diagnostic tests currently in standard clinical use. Remarkably, this performance is replicated in a completely independent cohort of patients at the Mount Sinai Health System in New York, which represents not only a different health system but an entirely different country with different medical practices and different standards for diagnostic coding and billing. The ability to predict rare disease diagnoses in this independent cohort shows the power and transferability of this approach.
In addition to successfully predicting rare diagnoses in participants’ EHR, RarePT also provides new evidence that a substantial number of participants may suffer from rare diseases without a diagnosis appearing in the EHR. This reinforces the known fact that many diagnoses are missing from EHR, due to biases in diagnosis, inconsistent use of billing codes, incomplete or fragmented patient records, and other issues 49 - 53 . This effect has previously been quantified for a variety of diseases, both common and rare, and often found to be substantial. For example, biobank studies have estimated that up to 75% of patients with erythropoietic protoporphyria (EPP) 39 , approximately 85% of patients with familial hypercholesterolemia 54 , and approximately 90% of patients with glycated hemoglobin (HbA1c) levels indicating diabetes 42 remain undiagnosed. This is especially problematic for rare diseases, due to the known difficulty of correctly diagnosing rare diseases and the long diagnostic odyssey experienced by many rare disease patients 3 - 6 , 55 . We hypothesized that RarePT would correctly predict many of these undiagnosed cases, causing them to appear as false positives despite actually being correct predictions. For rare diseases where relevant biomarkers were available, these biomarkers consistently showed a significant excess of abnormal values and more extreme values within the normal range in individuals predicted positive by RarePT, supporting the hypothesis that many of RarePT’s predictions are actually undiagnosed cases. Consistent with literature estimates for common diseases, we estimate the prevalence of undiagnosed cases in rare diseases to be remarkably high: 72% of phecodes we tested appeared to have an underdiagnosis rate above 20%, and 63% of phecodes we tested were consistent with a majority of cases being undiagnosed.
While these numbers may be higher than the true rate in the general population due to the UK Biobank being biased towards healthier participants, who are less likely to seek out and receive diagnoses than the general population 26 , 27 , 31 , both the existence and magnitude of this phenomenon are consistent with previous results on underdiagnosis of diseases in EHR. The RarePT model allows us to measure this underdiagnosis systematically across a range of rare diseases, which has not previously been possible, as well as to identify specific individuals who may be suffering from rare disease and have a missing or incorrect diagnosis.
There are many potential practical applications for RarePT. One of these is as a phenotype imputation step in a data preprocessing pipeline for downstream bioinformatic analysis, which has previously been unavailable for rare diagnoses 31 , 56 , 57 . Another application is in collecting rare disease cohorts for research studies or drug trials. Due to the rarity of rare diseases, identifying multiple patients with the same disease is a pressing problem in rare disease research, and the international research community has developed several tools to address it 7 - 9 . RarePT can augment or support these tools by allowing researchers to rapidly search EHR for patients who are likely to have a particular disease or patients who are phenotypically similar to another specific patient. It also has the potential to be developed into a clinical screening test for rare diseases, particularly in patients with a specific risk factor such as family history of disease or a genetic risk allele.
There are several limitations and areas of further development for this approach. First, phecodes are designed for phenome-wide association studies (PheWAS) primarily targeting common phenotypes and are not specifically designed to target rare diseases. While some specific rare phenotypes may be inaccessible to RarePT for this reason, recent studies have shown that vocabularies for common disease, including phecodes, do capture information about rare phenotypes 58 - 60 , and we identified phecodes that were rare in both the UK Biobank and MSDW cohorts. Future versions of this approach could increase the resolution for rare phenotypes by using phenotype ontologies designed to represent rare diseases, such as the Human Phenotype Ontology (HPO) 61 or the OrphaNet disease ontology 62 . However, there are tradeoffs involved in this choice. Using a more fine-grained vocabulary for rare disease would dramatically increase the complexity of the model and its computational requirements. The analyses we present here require generation of millions of phenotype predictions, which took several hours of GPU time under the current RarePT architecture and would quickly become infeasible if the phenotype encoding used a complex hierarchical structure with a vocabulary many times larger. Choosing a phenotype encoding scheme that can more precisely capture rare phenotypes could also harm the model’s ability to capture information about rare and common disease in the same vocabulary, which may reduce the power of the model for rare disease and its transferability to other cohorts beyond its training set.
Second, the ICD-10 diagnosis codes we use to derive phecodes are known to be noisy and unreliable 52 , 63 , 64 . We have relied on established methods for automated phenome-wide phenotyping, many of which use diagnosis codes in spite of their limitations because more reliable sources of data are either difficult to access in an automated way or are not available phenome-wide 31 , 53 . This is particularly true for rare diseases, due to the difficulty of finding specialized experts who are capable of reviewing individual patient charts in detail to arrive at a confident diagnosis, particularly at scale 65 - 67 . In spite of this, the use of these automated phenotyping approaches could be a concern in our analysis of undiagnosed cases, since both our identification of participants with a disease diagnosis and our identification of patients with abnormal test results are based on these potentially unreliable automated procedures. It is possible that some of the supposedly undiagnosed cases we identified were actually diagnosed cases where the diagnosis escaped detection by our automated phenotyping process. It is also possible that the availability of certain test results and not others biased our analysis towards specific categories of phenotypes that are not representative of rare diseases in general. For example, blood disorders and metabolic disorders appear to be overrepresented among phenotypes with available diagnostic tests, while neoplasms and neurological disorders are entirely absent ( Supplementary Table S6 ). However, while diagnosis rates may differ by category, there is no reason to suppose that categories with greater availability of diagnostic tests are diagnosed at a lower rate than those with less availability of diagnostic tests. Indeed, if anything, the availability of simple diagnostic tests should increase the rate of diagnosis, making our estimates conservative. 
It is likely that we could make more reliable determinations of both diagnosis status and true phenotype by making use of other features available in the EHR, such as lab results, vitals, medications, or unstructured physician’s notes. We chose not to include these features in this analysis to avoid circularity in training and analysis, as the diagnoses contained in the EHR are informed by the lab results, vitals, and physician’s notes from the same EHR. Without careful insulation of these different modalities of data, any trained model or statistical analysis is likely to simply recapitulate the physician’s diagnostic criteria without gaining any predictive power for undiagnosed patients, a problem which RarePT avoids by excluding these redundant data sources. Previous studies have also identified undiagnosed cases using a longitudinal study design with direct physician involvement 40 , 41 . RarePT could facilitate this kind of analysis for specific diagnoses in future studies, following up on this broad automated analysis with in-depth analysis of individual diseases incorporating specific clinical expertise.
Finally, there are many opportunities to improve on our model architecture. Deep learning and AI are rapidly evolving fields, and the transformer-based architecture we used for this analysis may not be the optimal way to learn the semantic structure of EHR diagnoses. Recent studies have proposed new ways of deriving phenotype embeddings, including extracting them from curated knowledge graphs or from general-purpose large language models pretrained on non-EHR data 68 - 70 . There are also a variety of approaches that have been used to process time series data from EHR in machine learning applications, including neural network models designed for time series data such as recurrent neural networks and using pretrained large language models to process EHR 71 , 72 . While incorporating newer and more sophisticated approaches may improve the model, they may also promote overfitting and reduce transferability of the model across health systems and datasets, as well as slow training and prediction. RarePT is both transferable and tractable, essential properties for a method designed to process large health system datasets.
In this paper we have shown that our deep learning phenotype prediction approach, RarePT, is capable of modeling and predicting rare disease diagnoses on a phenome-wide basis, with performance that compares favorably to diagnostic screening tests used in clinical settings. Remarkably, RarePT achieves this performance not only in held-out segments of the UK Biobank cohort it was trained on, but also on an entirely separate cohort of patients in the Mount Sinai Health System in New York City. This demonstrates that the predictive features RarePT uses are not specific to the UK Biobank, but are robust to differences in recruitment strategy, differences in race and ethnicity, and even differences in medical practices and billing procedures between countries. In addition to capturing specific diagnoses, RarePT predictions are also associated with clinical outcomes, including mortality, quality of life, and specific biomarkers associated with rare disease. Finally, we used predicted phenotypes from the model to estimate the prevalence of undiagnosed rare disease in the UK Biobank, showing that it is likely extremely high. This kind of systematic phenome-wide analysis has not previously been possible for rare diseases, highlighting the utility of RarePT to conduct large-scale studies on rare disease. The high rate of undiagnosed rare disease in large population datasets like the UK Biobank also highlights the need for new methods like RarePT to address the problem of undiagnosed rare disease and suggests a wide range of valuable clinical and research applications.

Author Contributions: Dr. Jordan and Dr. Do had full access to all of the data in the study and take responsibility for the integrity of the data and accuracy of the data analysis.
It is estimated that as many as 1 in 16 people worldwide suffer from rare diseases. Rare disease patients face difficulty finding diagnosis and treatment for their conditions, including long diagnostic odysseys, multiple incorrect diagnoses, and unavailable or prohibitively expensive treatments. As a result, it is likely that large electronic health record (EHR) systems include high numbers of participants suffering from undiagnosed rare disease. While this has been shown in detail for specific diseases, these studies are expensive and time consuming and have only been feasible to perform for a handful of the thousands of known rare diseases. The bulk of these undiagnosed cases are effectively hidden, with no straightforward way to differentiate them from healthy controls. The ability to access them at scale would enormously expand our capacity to study and develop drugs for rare diseases, adding to tools aimed at increasing availability of study cohorts for rare disease. In this study, we train a deep learning transformer algorithm, RarePT (Rare-Phenotype Prediction Transformer), to impute undiagnosed rare disease from EHR diagnosis codes in 436,407 participants in the UK Biobank and validate it on an independent cohort of 3,333,560 individuals from the Mount Sinai Health System. We applied our model to 155 rare diagnosis codes with fewer than 250 cases each in the UK Biobank and predicted participants with elevated risk for each diagnosis, with the number of participants predicted to be at risk ranging from 85 to 22,000 for different diagnoses. These risk predictions are significantly associated with increased mortality for 65% of diagnoses, with disease burden expressed as disability-adjusted life years (DALY) for 73% of diagnoses, and with 72% of available disease-specific diagnostic tests.
They are also highly enriched for known rare diagnoses in patients not included in the training set, with an odds ratio (OR) of 48.0 in cross-validation cohorts of the UK Biobank and an OR of 30.6 in the independent Mount Sinai Health System cohort. Most importantly, RarePT successfully screens for undiagnosed patients in 32 rare diseases with available diagnostic tests in the UK Biobank. Using the trained model to estimate the prevalence of undiagnosed disease in the UK Biobank for these 32 rare phenotypes, we find that at least 50% of patients remain undiagnosed for 20 of 32 diseases. These estimates provide empirical evidence of a high prevalence of undiagnosed rare disease, as well as demonstrating the enormous potential benefit of using RarePT to screen for undiagnosed rare disease patients in large electronic health systems.

Funding/Support:
Dr. Do is supported by the National Institute of General Medical Sciences of the NIH (R35-GM124836). This research has been conducted using the UK Biobank Resource under Application Number 16218. This work was supported in part through the Mount Sinai Data Warehouse (MSDW) resources and staff expertise provided by Scientific Computing and Data at the Icahn School of Medicine at Mount Sinai.
Data availability statement:
Summary data required to generate figures will be deposited in a public repository prior to publication, and are available on request from the authors otherwise. Individual-level data from the UK Biobank and the Mount Sinai Data Warehouse are governed by third-party data use agreements and cannot be made available with this study. Researchers who qualify for access to deidentified data under the policies of the UK Biobank and/or the Mount Sinai Health System can access these data upon application to the respective institutions.
Code availability statement:
Code required to train all models, run all analyses, and generate all figures will be published in a public repository under an open-source license prior to publication, and is available upon request from the authors otherwise.

License: CC BY. Citation: medRxiv. 2023 Dec 24;:2023.12.21.23300393
PMC10775685 (PMID: 38196663)

Materials and Methods
Research Subjects
Blood samples and clinical, biological, and questionnaire data were collected after obtaining written informed consent. Genetic data for this project originated from 3 independent cohorts: (1) the Malta Next Generation Sequencing (NGS) Project (ethics approval: 031/2014) that uses high throughput sequencing (HTS) to investigate selected conditions in the Maltese (n = 146; 17 of whom form part of the IHH cohort, diagnosed [ 26 ] and recruited through the Reproductive Endocrinology Clinic at Mater Dei Hospital, Msida, Malta); (2) the Maltese Acute Myocardial Infarction (MAMI) study [ 27 ], which includes individuals between the ages of 18 and 77 years and for which extensive phenotypic and biochemical data, including hormone levels, are available (n = 1098; ethics approval: 32/2010). Of these, 423 are cases with a first myocardial infarction (MI) recruited at time of MI and recalled again at least 6 months post first MI, 210 are relatives of the cases, and 465 are population controls; and (3) an anonymous cord blood population cohort of Maltese neonates collected over a period of 3 months using a convenience consecutive sampling approach (n = 493; ethics approval: 44/2014). For the NGS project and the MAMI study samples, medical, demographic, and lifestyle data was collected through an extensive interviewer-led questionnaire. Serum and plasma samples were collected at time of recruitment and recoded prior to testing. All research subjects for the NGS project and the MAMI study have parents and grandparents of Maltese ethnicity.
DNA Analysis
DNA was extracted from EDTA whole blood or buffy coats using the salting out method [ 28 ]. Several methods were employed to genotype the GNRHR variant (NM_000406.2: c.317A > G; P30968: p.Q106R) in the different sample cohorts.
For the Malta NGS project, a custom targeted panel HTS approach that selectively captures all exons and up to 500 bp of flanking regions of the gene was used. GNRHR was among the selected genes on this panel. DNA libraries were constructed according to the Agilent SureSelect XT Target Enrichment System for Illumina Paired-End Sequencing Library Protocol Version B.3. Libraries were subsequently sequenced on an Illumina Hi-Seq 4000 platform at BGI, Hong Kong. HTS data was aligned to GRCh37 and variants called using NextGENe v.2.4.2.3 (SoftGenetics, LLC). Variants were filtered on a minimum of 30-fold coverage.
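The 30-fold coverage filter described above can be reproduced on any VCF-style record; a stdlib-only sketch that assumes read depth is reported as a `DP=` key in the INFO column (NextGENe's native report format may differ, and the records below use placeholder coordinates, not real calls):

```python
def passes_depth(vcf_line: str, min_depth: int = 30) -> bool:
    """Return True if a VCF record's INFO DP value meets the coverage cutoff."""
    fields = vcf_line.rstrip("\n").split("\t")
    info = {}
    for kv in fields[7].split(";"):  # INFO is the 8th tab-separated VCF column
        key, _, value = kv.partition("=")
        info[key] = value
    return int(info.get("DP", 0) or 0) >= min_depth

# Illustrative records (placeholder positions, invented values):
deep = "4\t100\t.\tA\tG\t99\tPASS\tDP=54;AF=0.48"
shallow = "4\t200\t.\tC\tT\t40\tPASS\tDP=12;AF=0.21"
```

Here `passes_depth(deep)` keeps the 54× record while `passes_depth(shallow)` discards the 12× one, mirroring the 30-fold filter applied to the panel data.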
A restriction enzyme digest with Xcml (NEB, UK) was used to genotype the neonatal cord blood collection [ 29 ]. The enzyme cleaves the wild-type but not the alternative allele. Restriction fragments were separated on 2% agarose gel and genotypes confirmed by Sanger sequencing.
For the MAMI study collection, the Kompetitive Allele Specific PCR (KASP™) genotyping assay for GNRHR p.Q106R was carried out at LGC Genomics, Germany. This assay is based on competitive allele-specific PCR that allows for bi-allelic scoring of the single nucleotide variant. This is quantified on the basis of fluorescence resonant energy transfer chemistry [ 30 ]. The controls were tested for Hardy–Weinberg equilibrium using Fisher's exact test, and the genotype and allele frequencies were calculated. Genotypes were confirmed by Sanger sequencing and by comparison to HTS datasets.
Hormone Assays and Fertility Data
Hormone assays were carried out on serum samples from the MAMI study collection. To limit diurnal variation, blood from overnight fasting subjects was always drawn in the morning between 0800 hours and 1000 hours. Samples were processed, aliquoted, and stored at −80 °C within 90 minutes of collection. At time of measurement, the frozen aliquots were utilized immediately after thawing. Samples were only thawed once.
Diagnostic enzyme-amplified chemiluminescent immunoassays (Immulite 2000 System, Siemens, USA) were carried out on all samples in the MAMI study collection to determine the concentration of several hormones that have a role in the hypothalamic-pituitary axis. Instrument calibration was carried out weekly for testosterone and every 4 weeks for the remaining hormones. Before each run, internal quality control checks were carried out. The hormones tested included estradiol (women only; Siemens Cat# L2KE22, RRID: AB_2936944), total testosterone (men only; Siemens Cat# L2KTW2, RRID: AB_2756391), luteinising hormone (LH; Siemens Cat# L2KLH2, RRID: AB_2756388), follicle stimulating hormone (FSH; Siemens Cat# L2KFS2, RRID: AB_2756389), sex hormone-binding globulin (SHBG; Siemens Cat# L2KSH2, RRID: AB_2819251), dehydroepiandrosterone sulfate (DHEA-SO4; Siemens Cat# L2KDS2, RRID: AB_2895591), prolactin (Siemens Cat# L2KPR2, RRID: AB_2827375), cortisol (Siemens Cat# LKCO2, RRID: AB_2810257), thyroid-stimulating hormone (TSH; Siemens Cat# LKTS1, RRID: AB_2827386), free T4 (Siemens Cat# LKFT41, RRID: AB_2827385), and growth hormone (GH; Siemens Cat# L2KGRH2, RRID: AB_2811291). All quantitative measurements were carried out according to the manufacturer's instructions. Hormone measurements that fell outside the detection limit of the kit were subsequently omitted from the analysis. The exception was estradiol, where nondetectable values (denoted ND) are given as the lower limit of the range and considered to be part of the reference range by the assay. Calculated free testosterone was computed from total testosterone and SHBG using the Vermeulen equation [ 31 ]. Since albumin measurements were not available, a standard value of 43 g/L was used.
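The Vermeulen calculation solves the testosterone binding equilibrium as a quadratic in the free fraction; a sketch, assuming the commonly used association constants from Vermeulen et al. (1999) and the fixed 43 g/L albumin value mentioned above:

```python
import math

K_ALB = 3.6e4    # L/mol, testosterone-albumin association constant
K_SHBG = 1.0e9   # L/mol, testosterone-SHBG association constant
ALB_MW = 69_000  # g/mol, approximate molar mass of albumin

def free_testosterone(tt_nmol_l: float, shbg_nmol_l: float,
                      albumin_g_l: float = 43.0) -> float:
    """Calculated free testosterone (nmol/L) via the Vermeulen equation."""
    tt = tt_nmol_l * 1e-9        # total testosterone, mol/L
    shbg = shbg_nmol_l * 1e-9    # SHBG, mol/L
    n = 1 + K_ALB * (albumin_g_l / ALB_MW)
    a = n * K_SHBG
    b = n + K_SHBG * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4 * a * tt)) / (2 * a)  # free T, mol/L
    return ft * 1e9

# e.g. TT = 20 nmol/L with SHBG = 40 nmol/L gives roughly 0.38 nmol/L free
```

For physiological inputs the free fraction comes out near 2% of total testosterone, consistent with the usual behavior of this formula.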
Age, sex, and data associated with puberty and fertility including age of menarche and frequency of menstruation and need and use of fertility treatment in women, number of offspring, and marital status were obtained from an interviewer-led questionnaire. We did not assess puberty and fertility in men through direct questions but used family tree information to determine number of offspring. Individuals who previously had cancer, an oophorectomy, or hysterectomy at pre- or perimenopausal age and those on hormone replacement therapy were omitted from this analysis, resulting in 978 samples: 739 men and 239 women for whom demographic, genetic, and hormone data were available.
Statistical Data Analysis
Statistical analyses were conducted using SPSS Statistics version 25 (IBM Corp. Armonk, NY, USA). The neonatal collection and MAMI control group were tested for the proportion of genotypes and deviation from the Hardy–Weinberg equilibrium using Fisher's exact test. Nonparametric tests were used for hormone analysis due to the nonnormal distribution of the data. The Kruskal–Wallis test was used to compare 3 or more independent samples of unequal sample sizes, and the Mann–Whitney test was used to carry out pair-wise comparisons. The α threshold employed was arbitrarily set at P < .05.
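As a sanity check on the equilibrium testing, the chi-square goodness-of-fit version of the HWE test can be sketched from genotype counts. Note this is only an approximation illustrating the idea; the study used Fisher's exact test, which is more reliable when an expected genotype class is tiny, as it is for a rare homozygote:

```python
import math

def hwe_chi2(n_aa: int, n_ab: int, n_bb: int):
    """Chi-square test against Hardy-Weinberg expectations (1 df)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of the A allele
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_aa, n_ab, n_bb)
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival fn, 1 df
    return chi2, p_value

# Cord-blood counts from this study: 466 wild-type, 25 het, 2 hom-alt.
# The expected hom-alt count is only ~0.43, too small for the chi-square
# approximation to be trusted -- exactly why an exact test is preferred here.
chi2, pval = hwe_chi2(466, 25, 2)
```

With such a small expected homozygote count the chi-square statistic is inflated by that single cell, so the exact-test choice made in the study is the appropriate one.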
Prior to hormone analysis, medians and P-values were calculated for all hormones by sex, case-control-relative status, and 10-year age groups to identify whether sex, age, or case-control-relative status influenced hormone levels. Median hormone levels were only found to be statistically different between men and women. An age trend was also observed in DHEA-SO4, GH, SHBG, and free testosterone in men and DHEA-SO4, estradiol, FSH, and LH in women. Thus, data is presented separately for men and women and subdivided into <50 and ≥50 years of age. For DHEA-SO4, SHBG, estradiol, LH, and FSH in women, data is subdivided into pre-/perimenopausal and postmenopausal status.
Scatter plots for hormone levels by genotype across the different age groups were constructed using GraphPad Prism version 8.0.2 (GraphPad Software Inc, San Diego, CA, USA). | Results
Analysis of HTS data of 146 research subjects from the Malta NGS project identified 8 heterozygotes harboring the GNRHR p.Q106R variant. Four of these (2 of whom were parent and offspring) formed part of the IHH cohort (n = 17), while the other 4 did not have any IHH characteristics, nor a family history of IHH. A local population study on a cord blood DNA collection (n = 493) representative of the current Maltese population identified 2 homozygous alternative and 25 heterozygous individuals, all unrelated, translating to a population MAF of 0.029 [ 29 ]. This is considerably higher than the MAFs of the European population or its subpopulations, with the lowest reported variant frequency being 0.0012 in the Estonian population and the highest being 0.0051 in the southern European population [ 32 ]. For a variant reported to be pathogenic in multiple literature accounts, this overrepresentation in the Maltese population led us to evaluate the effect of GNRHR p.Q106R heterozygosity on fertility and hormone profiles in an adult cohort.
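The reported MAF and its overrepresentation follow directly from the genotype counts; an illustrative stdlib-only sketch (the exact binomial tail is our own check, not the authors' analysis):

```python
from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """Exact upper-tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 2 homozygous-alternative and 25 heterozygous neonates among 493 give
# 2*2 + 25 = 29 alternative alleles on 2*493 = 986 chromosomes.
maf = 29 / 986                     # ~0.0294, matching the reported value
# How unlikely are >= 29 alternative alleles if the true frequency were
# the southern-European 0.0051 cited in the text?
p_val = binom_sf(29, 986, 0.0051)  # vanishingly small
```

Under the highest European subpopulation frequency cited, only about 5 alternative alleles would be expected among 986 chromosomes, so the observed 29 is incompatible with chance sampling alone.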
Clinical and Demographic Characteristics
From 978 eligible individuals of the MAMI study collection, 43 (4.4%) were heterozygous for GNRHR p.Q106R ( Fig. 1 ). These consisted of 26 men with ages ranging from 27 to 68 years and 17 women between the age of 23 and 75 years. The genetic variant was in the Hardy–Weinberg equilibrium in controls with MAF = 0.033. From the 43 heterozygotes, 35 had offspring. In heterozygous women ( Table 1 ), menarche occurred between the ages of 9 and 14 years. Menstrual bleeding lasted 3 to 8 days, and the number of days elapsed between each cycle ranged from 25 to 34 days; only 1 woman required fertility treatment. Of the 8 heterozygotes without offspring (3 women, ages 23, 23, and 27 years, and 5 men, ages 27, 31, 53, 59, and 68 years), only 2 men aged 31 and 68 years are, or have been, in a steady relationship.
Hormone Assays
Upon analysis of research subjects grouped into 10-year age groups (data not shown), we observed a steady decline in DHEA-SO4 levels with age in both sexes, an increase in SHBG levels with age in men, an age-dependent decrease in free testosterone in men and estradiol in women, and an overall age-dependent FSH and LH increase in women up until menopause. These trends are reflective of the natural biological aging process [ 33-35 ].
TSH levels were lower in heterozygous men older than 50 years of age compared to levels in homozygous wild-type men ( P < .01; Fig. 2 , Table 2 ). A similar trend was also observed in men below age 50; however, this did not reach statistical significance, possibly due to the small number of heterozygotes. In women the difference was much smaller and did not reach statistical significance in any of the age groups. There were no statistically significant differences in median levels between wild-type individuals and heterozygotes for prolactin, GH, DHEA-SO4, cortisol, SHBG, free T4, LH, and FSH in both sexes, estradiol in women, and free testosterone in men.
The 43 GNRHR p.Q106R heterozygotes in the collection were spread across different age groups with a comparable proportion of heterozygotes to wild-type homozygotes in each age group for both sexes. Across the different hormones, there were no notable differences or trends between the measured hormone levels of both genotypes and the reference ranges as most data points fell within the local hospital reference ranges ( Figs. 2 – 5 , Table 3 ). | Discussion
Based on available fertility data ( Table 1 ) and hormone profile analyses ( Table 2 ) of GNRHR p.Q106R heterozygote and wild-type individuals, we report no differences between the 2 genotypes. In fact, there were no differences in median levels of the reproductive hormones (LH, FSH, estradiol, free testosterone, SHBG, and DHEA-SO4) between wild-type and heterozygous individuals ( Table 2 ). The only difference in median hormone levels observed was a lower median TSH level in heterozygous men 50 years and older. However, 1 out of 19 men in this category falls outside the acceptable normal reference ranges for TSH (0.3–3 mIU/L), compared to 3 out of 469 wild-type men aged 50 and above. Questionnaire data from our heterozygous cohort shows that none of the individuals reported a thyroid disorder when specifically asked during the questionnaire interview. Low TSH levels coupled with normal free T4 and T3 levels are consistent with subclinical hyperthyroidism, and it is well documented that thyroid malfunction may disrupt menstrual patterns, cause ovulation disorders in women, and cause infertility in both sexes [ 36 , 37 ]. A study on individuals with polycystic ovary syndrome does suggest that there are pathophysiological links between the GNRHR locus and thyroid function [ 38 ]. However, more evidence than the current literature provides is needed to show how GnRH neurons and thyroid hormones interact [ 39 ]. Furthermore, low TSH levels measured by immunoassays can also be due to interference by biotin [ 40 , 41 ] or by endogenous antibodies [ 41-43 ].
Our findings indicate that the onset of puberty (in women) or the likelihood for an individual to bear offspring is not influenced by being heterozygous for GNRHR p.Q106R. This is corroborated by the high frequency of heterozygotes who had offspring, which is reflective of expected frequencies in a healthy population. In women, collection of data pertaining to the age at menarche was also particularly important since constitutional delay of growth and puberty and IHH form part of the same GnRH deficiency spectrum with shared pathogenic mechanisms and similar clinical phenotypes caused by variants at overlapping genetic loci [ 44 ]. Together with IHH, late menarche and constitutional delay of growth and puberty have been previously associated with homozygous GNRHR partial loss-of-function variants [ 15 , 45 ]. Mild IHH phenotypes such as secondary hypothalamic amenorrhea may manifest if extraneous stressors such as extreme weight variation, strenuous exercise, or psychological stress become present in tandem with defects in the biology of GnRH attributed to monoallelic GNRHR variants [ 46-48 ]. This gene/environment interaction has also been observed for FGFR1 , ANOS1, and PROKR2 [ 48 , 49 ]. In a cohort of GnRH-deficient patients (n = 397), Sykiotis et al identified Caucasian individuals harboring monoallelic autosomal recessive GNRHR mutations in 6% of the patient cohort, suggesting that when such variants in heterozygosity form oligogenic interactions due to endogamy or chance, inhibition of the hypothalamic-pituitary-gonadal axis occurs leading to IHH phenotype manifestations [ 49 ].
In a Maltese newborn cord-blood collection, we found the GNRHR p.Q106R variant (MAF = 0.029; n = 493) to be 10 times more frequent than in the global population (MAF = 0.003; n = 282,638) and 6 times more frequent than in the southern European population (MAF = 0.005, n = 11,596 [ 32 ]). We suspect that the high GNRHR p.Q106R carrier frequency in the Maltese population is due to a founder effect and contributes to a high prevalence of autosomal recessive IHH locally. This variant has already been described to have founder attributes for heterozygote carriers in other populations (European, North and South American, and South Asian) [ 19 , 50 ].
While pathogenic variants are gradually eliminated from the human gene pool, founder variants tend to persist and are passed down through the generations [ 19 ]. Counterintuitively for natural selection, this persistence can stem from heterozygotes being relatively protected against certain diseases, conferring a selective advantage over their homozygous-alternative and wild-type counterparts. It has been proposed that variants like GNRHR p.Q106R may play a role in impairing the reproductive function of a population during adverse temporal circumstances such as states of destitution, environmental calamities, climate change, and population migration that are disadvantageous and taxing on the energy-demanding needs of pregnancy and survival [ 51 ]. Reports of reversible functional hypothalamic amenorrhea in women with heterozygous variants in IHH-related genes [ 47 , 48 ] support this hypothesis. This may have imparted an evolutionary advantage, giving women and their future offspring flexibility and reversibility of the hypothalamic-pituitary-gonadal axis to resume GnRH function during more favorable conditions. For this reason, such founder variants remain conserved within the gene pool [ 52 ].
There are multiple reports of compound GNRHR heterozygous patients in whom IHH can be explained by the presence of 2 different GNRHR variant alleles [ 23 , 53 , 54 ]. However, the presence of the monoallelic GNRHR p.Q106R in multiple individuals from the MAMI cohort with normal hormone levels, puberty, and fertility reinforces the premise that additional deleterious variants in other genes must be present in IHH patients in whom only a single GNRHR variant allele is identified. These additional variants would adversely modulate GnRH signaling through digenic or oligogenic inheritance [ 49 , 55 , 56 ]. Depending on the number and the nature of other contributing genes and alleles that partake in this mode of inheritance, one may expect variable degrees of expressivity in the reproductive and pathophysiological phenotypes of the condition. This does not exclude potential undiscovered genes from interacting pleiotropically with GNRHR [ 49 , 57 , 58 ].
In this study we show that GNRHR p.Q106R heterozygotes do not have fertility issues or impaired gonadotropin and sex steroid hormone levels. Thus, GNRHR heterozygotes who exhibit IHH characteristics must have at least 1 other variant in a different IHH causative gene. Clinically this is an important consideration, particularly for diagnostic laboratories making use of gene panels, since the IHH phenotype of individuals with monogenic GNRHR variants cannot be explained solely by this heterozygosity. | Abstract
Context
The gonadotropin-releasing hormone receptor variant GNRHR p.Q106R (rs104893836) in homozygosity, compound heterozygosity, or single heterozygosity is often reported as the causative variant in idiopathic hypogonadotropic hypogonadism (IHH) patients with GnRH deficiency. Genotyping of a Maltese newborn cord-blood collection yielded a minor allele frequency (MAF) 10 times higher (MAF = 0.029; n = 493) than that of the global population (MAF = 0.003).
Objective
To determine whether GNRHR p.Q106R in heterozygosity influences profiles of endogenous hormones belonging to the hypothalamic-pituitary axis and the onset of puberty and fertility in adult men (n = 739) and women (n = 239).
Design, Setting, and Participants
Analysis of questionnaire data relating to puberty and fertility, genotyping of the GNRHR p.Q106R variant, and hormone profiling of a highly phenotyped Maltese adult cohort from the Maltese Acute Myocardial Infarction Study.
Main Outcome and Results
Out of 978 adults, 43 GNRHR p.Q106R heterozygotes (26 men and 17 women) were identified. Hormone levels and fertility for all heterozygotes were within normal parameters except for TSH, which was lower in men 50 years or older.
Conclusion
Hormone data and baseline fertility characteristics of GNRHR p.Q106R heterozygotes are comparable to those of homozygous wild-type individuals who have no reproductive problems. The heterozygous genotype alone does not impair the levels of investigated gonadotropins and sex steroid hormones or affect fertility. GNRHR p.Q106R heterozygotes who exhibit IHH characteristics must have at least 1 other variant, probably in a different IHH gene, that drives pathogenicity. We also conclude that GNRHR p.Q106R is likely a founder variant due to its overrepresentation and prevalence in the island population of Malta. | Idiopathic hypogonadotropic hypogonadism (IHH) is a rare genetic disorder that is characterized by partial or complete absence of pubertal development. Dysfunction of the hypothalamic-pituitary-gonadal axis leads to disorders of development, sexual maturation, and reproduction. Disruption in the migration, differentiation, or activation of GnRH neurons and/or a disruption in the production, pulsatile secretion, or action of GnRH [ 1-3 ] lead to varying degrees of impaired gonadotropin secretion and subsequently hypogonadism, as determined by low testosterone levels and absence of gametogenesis. It is classically termed Kallmann syndrome if it presents with anosmia or hyposmia, and normosmic IHH if the sense of smell is not impaired [ 4 ].
The GnRH decapeptide that is released from specialized neurons in the medio-basal and anterior hypothalamus binds to GnRH receptors located in the adenohypophyseal gonadotrope cell membranes. Once activated, these cells synthesize and release the gonadotropins FSH and LH [ 5 , 6 ]. Gonadotropin secretion stimulates gametogenesis and gonadal steroidogenesis [ 7 , 8 ]. However, pathogenic variation in the GnRH receptor ( GNRHR ; OMIM 138850; phenotype MIM 146110; GRCh38 genomic coordinates: 4:67,737,117–67,754,387) may compromise the interaction with GnRH and result in IHH [ 8-11 ]. GnRH or GnRH receptor variants may also lead to abnormalities of downstream signaling such as intracellular calcium ion fluxes, cAMP signaling, inositol triphosphate generation, DNA transcription, and the synthesis and secretion of gonadotropins [ 7 , 12 ].
The GnRH receptor is a 328 amino acid long polypeptide transmembrane rhodopsin-like G protein-coupled receptor. GNRHR was among the first genes to be implicated in IHH, and it is classically associated with a recessive mode of inheritance, where heterozygous individuals are generally asymptomatic [ 13-15 ]. Most reported variants with a suspected functional effect in GNRHR are missense changes that impede GnRH signaling. There is an increasing amount of data suggesting that heterozygous variants may lead to diminished receptor signaling through various mechanisms. These include the loss or reduction of receptor expression due to the rerouting of misfolded proteins toward degradation pathways instead of localization to the cell membrane, a decrease in ligand binding because of variants in the ligand binding pocket, or a decrease in G-protein coupling or signalling due to variants in the signaling domains [ 9 , 15 , 16 ]. It has been demonstrated that the prevalence of heterozygous GNRHR variants is statistically significantly higher in individuals with GnRH deficiency when compared with controls (2.5% in cases, 0.5% in controls, P < .01), suggesting that monoallelic expression might impact reproductive function possibly when combined with other genetic or environmental factors [ 15 ]. While dominant-negative effects of monoallelic GNRHR variants have been previously described in vitro , one cannot ignore the possibility that in vivo either oligogenicity or nongenetic factors may also play a role, and that additional genes that play a role in IHH are still to be determined [ 17 , 18 ].
The most frequently reported pathogenic GNRHR variant across different ethnic groups is p.Q106R (rs104893836) [ 10 , 19 , 20 ]. This glutamine to arginine substitution at residue 106 sits on the first extracellular hydrophobic loop of the G protein-coupled receptor [ 21 ]. The variant causes a conformational change in the receptor and diminishes ligand binding and receptor activation, resulting in partial loss of function [ 10 , 13 , 22 ]. In vitro expression data show that the p.Q106R variant receptor requires 50 times higher ligand levels for half-maximal production of the secondary messenger inositol triphosphate (IP3) [ 23 ]. Additionally, this variant compromises cAMP signaling pathways and the activation of extracellular signal-regulated kinase (ERK) [ 24 , 25 ]. Downstream of these pathways, the stimulation of pituitary gonadotropin alpha-subunit, LH-beta, and FSH-beta gene promoter activity and expression is also reduced compared with the wild-type [ 25 ].
Here we report an atypically high frequency of GNRHR p.Q106R in the Maltese population, which presented an opportunity to conduct an endocrine assessment of heterozygous individuals between 18 and 77 years old. We also investigate reported puberty and fertility parameters in the identified GNRHR p.Q106R heterozygotes. | Acknowledgments
Ms. Anna Lisa Sciortino is acknowledged for overseeing biochemical assays carried out on the IMMULITE 2000 and Ms. Simona Maria Pagano and Ms. Dorianne Cassar for assisting with LH and FSH assays.
Funding
This work was funded by the MAMI study (R&I-2008-006, a collaboration between the University of Malta and the Malta Department of Health) and the Malta NGS project (R&I-2012-024, a collaboration between the University of Malta and the Department of Pathology at Mater Dei Hospital), both supported by national funding through the R&I program administered by the Malta Council for Science and Technology awarded to S.B.W. and a Research Excellence Grant (I21LU04) of the University of Malta awarded to R.F. The research work disclosed in this publication was also partly funded by the Tertiary Education Scholarships Scheme awarded to C.J.A.
Disclosures
The authors have nothing to disclose or declare that could be perceived as prejudicing the impartiality of this study.
Data Availability
Some data sets generated and/or analysed during the current study are not publicly available but are available from the corresponding author upon reasonable request.
Citation: J Endocr Soc. 2023 Dec 29; 8(2):bvad172 (PMC10775685). License: CC BY.
PMC10776312 (PMID: 38205163)
Cell Cultures, Sample Preparations, and Reagents
The detailed steps for cell culture were in accordance with those described in our previous studies. Briefly, keratinocytes (HEK, Cat # 2110, ScienCell) were cultured in normal (6 mM) and high (30 mM) glucose media for 24 hours, respectively, to simulate diabetic conditions. Similarly, the human immortalized keratinocyte (HaCaT) cell line (Cat # CL-0090, Procell) was cultured in MEM (Cat # PYG0029, Boster) and DMEM (Cat # DZPYG0209, Boster) medium for 24 hours, respectively, to simulate diabetic conditions. Total RNA from the cell samples was extracted using RNAiso Plus reagent (Cat # 9108, Takara). Total protein was extracted from the cells using RIPA lysis buffer (Cat # P0013K, Beyotime) following the manufacturer's protocol. Supernatants were collected.
Acquisition and Preparation of Skin Tissues
A total of 10 participants, comprising 5 subjects with concomitant diabetes mellitus and 5 control subjects without diabetes, participated in this study. All of the included patients with diabetes mellitus had a history of chronic diabetes, with a duration longer than 10 years. Skin tissues from patients with and without diabetes were collected during surgery. All of the participants provided written informed consent (Huazhong University of Science and Technology Ethics Committee, [2022] No. 3110). In addition, we created a type 2 diabetes mouse model and purchased db/db mice (Cat # HM0046, Shulb) to collect their skin tissues. This project was approved by the Wuhan Union Hospital Ethics Committee (S1983). A high-fat diet combined with streptozotocin injections was used to establish a diabetic mouse model, which is considered suitable for inducing the hallmark features of human type 2 diabetes [ 22 ]. Briefly, after fasting for 12 hours, streptozotocin (STZ; 120 mg/kg, Cat # S1312, Selleck) was administered intraperitoneally following a high-fat diet for 4 weeks. Glucose readings of nonfasted mice were recorded every 5 days. Thereafter, the mice were euthanized, and the dorsal skin was harvested for further analysis. All animals received care in compliance with the Principles of Laboratory Animal Care (NIH Publication No. 8023, revised 1978), and animal experiments adhered to the ARRIVE guidelines 2.0. The project was approved by the Wuhan Union Hospital Ethics Committee (REC 08/H1202/137), China.
Quantitative Real-Time Reverse-Transcriptase Polymerase Chain Reaction
The cDNA was synthesized from RNA using a PrimeScript RT Reagent Kit (Cat # RR037A, Takara). SYBR Premix Ex Taq (Cat # RR420A, Takara) was used for quantitative polymerase chain reaction (qPCR) on an ABI Step One Plus System (Applied Biosystems, Foster City, CA, USA). The primers used for KRT17 were 5′-CCCAGCTCAGCATGAAAGCA-3′ (forward) and 5′-ACAATGGTACGCACCTGACG-3′ (reverse). All the primers were purchased from Sangon Biotech. The mRNA levels of the target genes were normalized to GAPDH (glyceraldehyde 3-phosphate dehydrogenase) using the 2^−ΔΔCt method.
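The relative quantification above can be sketched as follows. All Ct values in this example are hypothetical, invented only to illustrate the arithmetic of the 2^−ΔΔCt method:

```python
# Hypothetical sketch of the 2^-ΔΔCt relative quantification used to
# normalize KRT17 mRNA to GAPDH; all Ct values here are invented.

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_sample = ct_target - ct_ref                 # ΔCt, treated sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl      # ΔCt, control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control  # ΔΔCt
    return 2 ** (-delta_delta_ct)

# KRT17 crossing the threshold 2 cycles earlier (relative to GAPDH) in the
# treated sample corresponds to a 4-fold increase in expression:
fold = fold_change_ddct(ct_target=24.0, ct_ref=18.0,
                        ct_target_ctrl=26.0, ct_ref_ctrl=18.0)
print(fold)  # 4.0
```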
Western Blotting
Total proteins from HEK and HaCaT cells in 6-well plates and tissues were extracted using radioimmunoprecipitation assay (RIPA) lysis buffer containing 2% phenylmethylsulfonyl fluoride (PMSF) and phosphatase inhibitor (Cat # P0013K, Beyotime). Proteins were separated using 10% sodium dodecyl sulfate–polyacrylamide gel electrophoresis and transferred onto a nitrocellulose membrane. Protein probing was performed by overnight membrane incubation at 4 °C with primary antibodies (KRT17, Cat #18502-1-AP, Proteintech, RRID: AB 10644296). The membrane was then washed with TBST and incubated with horseradish peroxidase-conjugated secondary antibody (Cat# AS014, ABclonal, RRID:AB 2769854) for 2 hours. Blots were developed using enhanced chemiluminescence reagent (Cat # MA0186, Meilunbio), and band intensities were analyzed using ImageJ software (NIH, USA).
Enzyme-Linked Immunosorbent Assay
The expression levels of KRT17 in the supernatant of HEK and HaCaT cells were measured using an enzyme-linked immunosorbent assay (ELISA) kit (Cat # JL15259-48T, Jiang Lai Biotech) and a microplate reader (Thermo Fisher Scientific, USA). The ELISA was performed according to the manufacturer's instructions. The primary KRT17 antibody in the kit was from Proteintech (RRID: AB 10644296).
Immunohistochemistry
All staining of skin samples was conducted by the Biossci Company (Hubei, China). The skin samples were fixed in 4% paraformaldehyde, washed in phosphate-buffered saline (PBS), dehydrated, and embedded in paraffin. Tissue sections were treated with primary antibodies (KRT17, Cat #18502-1-AP, Proteintech, RRID: AB 10644296), followed by incubation with the appropriate secondary antibodies (Cat# PR30009, Proteintech, RRID:AB 2934294). DAB (DAKO) was used to visualize the reaction, followed by counterstaining with hematoxylin. The stained sections were then examined using a light microscope.
Stimulation With Recombinant Human Cytokeratin 17
HDFs (Cat # 2320, ScienCell) were cultured in normal glucose (8 mM) and seeded in 6-well plates. At 60% to 70% confluence, the cells were starved in serum-free medium for 12 hours before stimulation with 0.1, 1, and 10 ng/mL recombinant human cytokeratin 17 (KRT17, Cat#PRO-1883, ProSpec).
Observation of Cell Morphology and Growth
The growth of cells stimulated with different concentrations of KRT17 was observed and recorded daily under an inverted microscope.
Cell Proliferation Assay
Cell proliferation was assessed using the Cell Counting Kit-8 (CCK8, Cat # CK04-500T, Dojindo). HDF were seeded in 96-well plates at a density of 1 × 10^4 cells/well before the replacement of fresh media containing different concentrations of KRT17 (0, 0.1, 1, and 10 ng/mL). After 24, 48, and 72 hours, cell proliferation was measured using the CCK-8 assay. Briefly, 10 μL of CCK-8 solution was added to each well containing 100 μL of culture medium and incubated for 1 hour at 37 °C. Finally, absorbance was measured at 450 nm using a microplate reader (Thermo Fisher, USA).
Cell Migration Assay
Cell migration was assessed by scratch and Transwell assays. HDF cells were seeded in three 12-well plates for each treatment. When the cell confluency reached 90% to 95%, 3 different scratches were made in each well. Each scratch was made using a 20-μL pipette tip placed along the diameter of the well. Then, cells were starved in serum-free culture medium for 24 hours. Cells were washed with PBS to remove the scratched cells. Fresh complete culture medium containing different concentrations of KRT17 (0 and 1 ng/mL) was added. Images were then acquired from the same area for each treatment condition at 0, 12, and 24 hours. For the Transwell assay, 24-well Transwell chambers (Corning) were used. HDF cells were seeded into the upper chamber in basal medium without fetal bovine serum (FBS) at a density of 1 × 10^5 cells/well, while the lower chamber was filled with medium containing 10% FBS and different concentrations of KRT17. Invading cells were fixed and quantified after 24 hours of incubation.
RNA Extraction and RNA Sequencing Procedures
RNAs were extracted from cultivated HDF cells using RNAiso Plus (Cat # 9108, TaKaRa). RNA sequencing (RNA-seq) and RNA-seq analyses were performed using a commercially available service (service ID # F21FTSCCWGT0114, BGI, Wuhan, China). Briefly, total RNA was extracted and mRNA was enriched using oligo (dT) beads for library construction. After library construction, a qualified library was selected for sequencing. Following sequencing of each cDNA library, the raw sequencing data were transformed into the original sequence data, termed raw data or raw reads. Raw sequencing read QC and filtering were performed using Fastp. After filtering, clean reads were aligned against the reference genome. Clean reads were then processed in downstream analyses, including gene expression analysis and deeper analyses based on gene expression. The sequencing data that support the findings of this study have been deposited in the CNGB Sequence Archive (CNSA) of the China National GeneBank DataBase (CNGBdb) with accession number CNP0004789.
Functional Enrichment Analysis
Normalization and differential expression analyses were estimated from count data using the DEGseq package in the analysis system of Dr. Tom from BGI. Differentially expressed genes (DEGs) were screened using FDR-adjusted q values (q values ≤ 0.05), and fold changes ≥1.2. DEGs were extracted for GO functional enrichment analysis and KEGG pathway enrichment analysis using the analysis system of Dr. Tom from the BGI. Data visualization was performed using a bubble diagram.
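The screening thresholds above (q ≤ 0.05 and a fold change of at least 1.2 in either direction) can be sketched as a simple filter. The toy records below are invented for illustration; the study's actual screen used the DEGseq package within BGI's Dr. Tom system:

```python
# Hypothetical sketch of the DEG screen (q <= 0.05 and >= 1.2-fold change
# up or down); the toy records are invented, not study data.
records = [
    # (gene, fold change treated/control, FDR-adjusted q value)
    ("ITGA11", 0.55, 0.01),
    ("GENE_A", 1.35, 0.20),  # fails the q-value cutoff
    ("GENE_B", 1.05, 0.01),  # fails the fold-change cutoff
    ("GENE_C", 2.10, 0.03),
]

def is_deg(fold_change, q_value, fc_cutoff=1.2, q_cutoff=0.05):
    """A gene passes if q is small and the change is large in either direction."""
    return q_value <= q_cutoff and (
        fold_change >= fc_cutoff or fold_change <= 1 / fc_cutoff
    )

up = [g for g, fc, q in records if is_deg(fc, q) and fc >= 1.2]
down = [g for g, fc, q in records if is_deg(fc, q) and fc <= 1 / 1.2]
print(up, down)  # ['GENE_C'] ['ITGA11']
```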
Immunofluorescence Staining
HDF were seeded on circular coverslips in 24-well plates at a density of 5000 cells/well and cultured. After treatment with 1 ng/mL KRT17 in complete medium for 24 hours, HDF were washed twice with PBS, fixed with 4% paraformaldehyde for 30 minutes, and rinsed twice with PBS. The cells were then treated with 0.1% Triton X-100 for 10 minutes. Slides were blocked with 5% goat serum (Boster, Wuhan, China) for 1 hour and incubated with integrin alpha 11 (ITGA11) primary antibodies (1:50, Cat # A10084, ABclonal Technology, RRID: AB 2757608) overnight at 4 °C. The cells were then incubated with secondary antibodies (Cat # SA00003-2, Proteintech, RRID: AB 2890897) labeled with FITC (green) (1:200, Servicebio) at 37 °C for 1 hour in the dark. The cells were subsequently stained with DAPI (blue) for 5 minutes in the dark. Images were acquired using a fluorescence microscope (Bio-Rad Laboratories).
Statistical Analyses
Data are expressed as the mean ± SEM. Parametric and non-parametric quantitative variables were compared using the Student t test and Mann–Whitney U-test, respectively. The least significant difference (LSD) method in one-way ANOVA was used for pairwise comparisons between different groups. Statistical significance was set at P < .05. All figures were generated using GraphPad Prism 9.0 and Adobe Illustrator CC 2015. | Results
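The two-group comparison logic above can be sketched with a pooled-variance Student t test. The two samples here are synthetic, and the critical value 2.228 is the standard two-tailed t threshold for 10 degrees of freedom at α = .05; this is an illustration only, not the study's software pipeline:

```python
# Hypothetical sketch of a two-sample Student t test with pooled variance
# (equal variances assumed) and significance declared at P < .05.
from statistics import mean, stdev
from math import sqrt

control = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3]   # synthetic measurements
treated = [1.8, 2.1, 1.9, 2.4, 2.0, 2.2]   # synthetic measurements

def student_t(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

t = student_t(control, treated)
df = len(control) + len(treated) - 2  # df = 10
# Two-tailed critical t for df = 10 at alpha = .05 is about 2.228, so
# |t| above that threshold corresponds to P < .05.
significant = abs(t) > 2.228
```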
Increased KRT17 Expression in HEK and HaCaT Cultures Under High Glucose
Our previous RNA sequencing–based transcriptome analysis of gene expression in normal and high glucose–stimulated skin cells showed increased KRT17 mRNA expression, but the corresponding changes at the protein level remained undetermined. We therefore investigated changes in KRT17 protein expression using 2 cell lineages under high glucose conditions. In the high glucose–stimulated HEK cell model, qPCR showed increased KRT17 mRNA expression ( Fig. 1A ), Western blotting showed higher KRT17 protein levels ( Fig. 1B and 1C ), and KRT17 protein levels in the supernatant were increased ( Fig. 1D ). The same tests showed increased KRT17 expression in the HG-stimulated HaCaT cell model ( Fig. 1E-1H ).
Assessment of Diabetic Animal Models
In this study, 5 of the 7 mice were successfully established as diabetes models, defined as having glucose levels above 14 mmol/L [ 20 , 21 ]. The last plasma glucose measurement of each mouse was used to calculate the average value (21.7 [19.8-22.3] mmol/L) for these diabetic mice.
Increased KRT17 Expression in Diabetic Mouse Skin
To further elucidate the relationship between KRT17 and diabetes status, we investigated the expression of KRT17 in the skin tissue of diabetic mice. In the skin tissue of db/db diabetic mice, the qPCR results showed increased expression of KRT17 mRNA compared to that in the normal group ( Fig. 2A ). The Western blot results showed that KRT17 protein levels were higher ( Fig. 2B and 2C ), and the immunohistochemistry results showed that KRT17 was increased ( Fig. 2D-2E ). The same test showed increased KRT17 expression in the diabetic mouse model, high-fat diet combined with streptozotocin (HFD/STZ) ( Fig. 2F-2J ).
Increased KRT17 Expression in Skin of Patients With Diabetes
Furthermore, we tested the expression of KRT17 in the skin of patients with diabetes. The qPCR results showed that KRT17 mRNA expression was increased in diabetic skin compared with control skin ( Fig. 3A ). Increased KRT17 protein expression was also observed in the skin of patients with diabetes by Western blotting ( Fig. 3B and 3C ) and immunohistochemistry ( Fig. 3D and 3E ).
Proliferation and Migration Effects of KRT17 on Dermal Fibroblasts
To clarify the effect of KRT17 on the skin, we designed an in vitro experiment in which HDF were stimulated with KRT17 to explore its effect on dermal function. To investigate the effect of KRT17 on HDF proliferation, cultures were stimulated with 0.1 and 1 ng/mL KRT17. Microscopic observation indicated that the growth of HDF was not significantly different between groups ( Fig. 4A ). The CCK8 assay revealed that stimulation with KRT17 had no effect on HDF cell proliferation ( Fig. 4B ). Furthermore, HDF cell migration in response to KRT17 was measured using scratch wound and Transwell migration assays. The scratch wound assay showed that KRT17 inhibited HDF cell migration ( Fig. 5A and 5B ). Similarly, the results of the Transwell assay demonstrated that KRT17 stimulation significantly inhibited the migration ability of HDF ( Fig. 5C and 5D ).
Identification of KRT17-Induced Changes in Dermal Fibroblasts by RNA Sequencing–Based Transcriptome Analysis
To further assess the role of KRT17 in HDF, we performed RNA sequencing–based transcriptome analysis by measuring gene expression patterns in biological replicates. A total of 537 DEGs were screened from 15 378 commonly detected genes, including 293 downregulated and 244 upregulated genes. The data were visualized using a volcano plot ( Fig. 6A ). GO analysis of the DEGs was conducted according to the 3 GO categories: biological process (BP), molecular function (MF), and cellular component (CC). The top 10 enriched GO terms in each GO category are shown ( Fig. 6B-6D ). The significantly enriched GO terms related to extracellular matrix components and their regulation. The top 10 significant KEGG pathways ranked by gene count are shown ( Fig. 6E ). The pathway enriched with the highest number of genes was the PI3K-Akt signaling pathway. As the PI3K-AKT pathway is a crucial signaling pathway in cellular processes such as proliferation and migration, the 22 target genes in the PI3K/AKT pathway were analyzed in detail ( Fig. 6F ). Eight genes were upregulated and 14 were downregulated. Among the downregulated genes, ITGA11 is a transmembrane glycoprotein that functions in cell adhesion and the transduction of signals involved in cell migration.
ITGA11 Expression Levels Were Decreased in HDF Cells After KRT17 Stimulation
Given the role of ITGA11 in cell migration, we next performed experiments to verify whether KRT17 stimulation alters the expression of ITGA11. In RNA-seq, decreased ITGA11 mRNA expression levels were observed in HDF following KRT17 stimulation ( Fig. 7A ). Consistent with the RNA-seq results, the qPCR results showed that ITGA11 expression was decreased ( Fig. 7B ). To further validate the above results, Western blotting and immunofluorescence staining were performed, showing decreased ITGA11 protein expression levels ( Fig. 7C-7F ).
Restoring ITGA11 Levels in the Presence of KRT17 Reversed the Cell Migration
The effect of restoring ITGA11 levels in the presence of KRT17 on cell migration was then assessed. The scratch wound assay showed that restoring ITGA11 reversed the KRT17-induced inhibition of HDF cell migration ( Fig. 8A and 8B ). Similarly, the results of the Transwell assay demonstrated that ITGA11 significantly restored the migration ability of HDF ( Fig. 8C and 8D ).
To investigate the gene expression changes and interactions of skin cells under high glucose conditions, we previously established 3 major skin cell models (HEK, HDF, and HDMEC) stimulated by high glucose and performed RNA-seq analysis on the cell samples. In this study, we first verified the upregulation of KRT17 (mainly expressed by HEK), which had been observed in all 3 types of skin cells under high glucose stimulation, in high glucose–stimulated cell models, diabetic animal models, and the skin tissues of diabetic patients. Based on the hypothesis that KRT17 may play a role in diabetic skin lesions, particularly through an effect on HDF, we established an in vitro KRT17-stimulated HDF cell model, observed changes in HDF cell proliferation and migration, and performed RNA-seq analysis. KRT17 downregulated ITGA11 expression in HDF, thereby inhibiting HDF cell migration ( Fig. 9 ). We suggest that KRT17, which is upregulated under diabetic pathological conditions, could be involved in delayed diabetic wound healing through the inhibition of HDF cell migration.
Although previous studies have demonstrated the regulatory role of KRT17 in various dermatological conditions such as congenital nail hypertrophy [ 23 ], multiple lipodystrophies [ 24 ], congenital alopecia [ 25 ], psoriasis [ 26 ], and acute and chronic wound healing [ 2 ], it is not known whether KRT17 is associated with delayed diabetic skin wound healing. In an RNA-seq analysis of oral mucosal ulcers and diabetic foot ulcer tissues, one investigator found that keratins associated with trauma activation (KRT6, KRT16, and KRT17) and keratins associated with cell differentiation (KRT1, KRT2, and KRT10) were upregulated in oral mucosal ulcer tissues to promote ulcer wound repair. However, all keratins were downregulated in diabetic foot chronic ulcer tissues, except for KRT17 [ 27 ]. The significant upregulation of KRT17 in chronic diabetic foot ulcer tissues suggests that KRT17 may be involved in the delayed healing of diabetic foot ulcers. However, another study showed significant downregulation of KRT17 in chronic nonhealing ulcers in RNA-seq analysis of healing ulcers vs chronic nonhealing venous ulcer tissues [ 2 ], suggesting that downregulation of KRT17 may impede the healing of chronic ulcers. These results suggest that KRT17 may play different functions in regulating wound healing under different pathological conditions, increasing the complexity of the role of KRT17 in regulating wound healing and requiring further in-depth studies.
Wound healing is a multistage and overlapping biological process that involves the coordinated cooperation of multiple cell types. The directed migration of fibroblasts is an important factor in accelerating wound healing, and any cause of impaired fibroblast migration results in delayed wound healing. Early in wound healing, fibroblasts are recruited to migrate to the wound surface and secrete various cytokines in preparation for the next phase of healing [ 28 , 29 ]. Subsequently, large numbers of fibroblasts migrate directionally to the wound surface with the support of the early extracellular matrix (ECM), where they accumulate and transform into myofibroblasts, enhancing the contractility of the wound and promoting recovery [ 21 ]. Under diabetic pathological conditions, stimulation by hyperglycemia and glycosylated ECM can inhibit dermal fibroblast migration [ 30-32 ].
KEGG pathway enrichment analysis of the RNA-seq data after KRT17 stimulation of HDF showed that the PI3K-AKT signaling pathway was enriched with the highest number of genes. Previous studies have reported that the PI3K-AKT signaling pathway plays an important role in regulating cell migration [ 33 , 34 ]. We screened 22 differential genes in the PI3K-AKT signaling pathway and found that ITGA11, a key molecule capable of regulating cell migration, was significantly downregulated; the downregulation of ITGA11 was verified in a cell model of KRT17-stimulated HDF at the mRNA, protein, and cellular levels.
Integrins are a family of proteins that mediate cellular ECM interactions, and cell-ECM interactions are essential for fundamental biological processes, such as cell proliferation, cell differentiation, cell migration, apoptosis, morphogenesis, and organogenesis [ 35 ]. ITGA11 is a type I collagen-binding β1 integrin expressed mainly by fibroblasts [ 36 ]. Previous studies have shown that ITGA11 mediates the contraction of fibrillar collagen gels in a manner similar to ITGA2 [ 37 ], and that ITGA11 deficiency reduces the strength of granulation tissue and wounds. Further studies have revealed that ITGA11 expression is associated with myofibroblast differentiation, matrix reorganization, and collagen deposition [ 38-40 ]. Another key role of ITGA11 is the regulation of cell migration [ 41 ], especially the promotion of fibroblast migration [ 42-44 ]. Some researchers have found that ITGA11 functions by activating the PI3K-AKT signaling pathway [ 45 ]. Therefore, it is reasonable to believe that KRT17 regulates the PI3K-AKT signaling pathway by downregulating ITGA11 to inhibit HDF cell migration, which in turn is involved in delayed diabetic wound healing.
Skin stability is maintained by the self-balancing function of epidermal cells and the integrity of connective tissue, and epidermal-dermal cells can interact with each other through important signals provided by cytokines to regulate the repair of traumatized skin [ 46 ]. Interactions between keratinocytes and fibroblasts play a dominant role in the later stages of trauma healing, including the induction of fibroblast differentiation into myofibroblasts by keratinocyte paracrine secretion and promotion of fibroblast ECM secretion by keratinocytes [ 47 ]. In our study, we found that keratinocytes can regulate cell migration and collagen synthesis of HDF through KRT17, providing new insights into the mechanism of epidermal-dermal cell interactions during trauma healing.
Although our study showed that KRT17 inhibits skin fibroblast migration in vitro, the mechanism by which KRT17 acts has only been preliminarily explored and needs to be elucidated in more in-depth studies. In addition, our study lacks an animal model of diabetic skin allograft incision to further confirm and clarify the effect of KRT17 on diabetic wound healing, as well as the effect of KRT17 on other skin cells, such as skin keratin-forming cells, dermal microvascular endothelial cells, and skin inflammatory cells, which are also directions for our future research. The next step in this study is to construct a mouse model of diabetic skin allograft incision with intervention of KRT17 expression using KRT17-KO transgenic mice and/or murine tail vein injection of KRT17 neutralizing antibody and formulation of KRT17-siRNA cream to evaluate the healing of diabetic mouse skin wounds after intervention of KRT17 expression and to provide new ideas for clinical diabetic ulcer wounds. | Peng Zhou, Yiqing Li and Chao Yang contributed equally to this work.
Abstract
Objective
To investigate the effects of overexpressed keratin 17 (KRT17) on the biology of human dermal fibroblasts (HDFs) and to explore the mechanism of KRT17 in diabetic wound healing.
Methods
KRT17 expression was tested in diabetic keratinocytes, animal models, and patient skin tissues (Huazhong University of Science and Technology Ethics Committee, [2022] No. 3110). Subsequently, HDFs were stimulated with different concentrations of KRT17 in vitro. Changes in the proliferation and migration of HDFs were observed. KRT17-induced changes in dermal fibroblasts were then identified by RNA sequencing–based transcriptome analysis.
Results
KRT17 expression was upregulated under pathological conditions. In vitro stimulation of HDFs with different concentrations of KRT17 inhibited cell migration. RNA-seq data showed that the enriched GO terms related to extracellular matrix components and their regulation. KEGG analysis revealed that the pathway with the highest number of enriched genes was PI3K-Akt, in which integrin alpha-11 (ITGA11) mRNA, a key molecule that regulates cell migration, was significantly downregulated. Decreased ITGA11 expression was observed after stimulation of HDFs with KRT17 in vitro.
Conclusion
Increased expression of KRT17 in diabetic pathological surroundings inhibits fibroblast migration by downregulating the expression of ITGA11. Thus, KRT17 may be a molecular target for the treatment of diabetic wounds. | Diabetes mellitus is one of the most common systemic chronic metabolic diseases, mainly manifested by symptoms of hyperglycemia caused by defective insulin secretion and/or insulin function, and it is a serious threat to human health worldwide [ 1 , 2 ]. Diabetes mellitus and its associated complications have a serious impact on the long-term life expectancy and quality of life of patients [ 3 , 4 ]. Patients with diabetes are prone to skin lesions, such as pruritus, necrobiosis lipoidica, scleredema adultorum of Buschke, and granuloma annulare [ 5 ]. Apart from these noninfectious skin diseases, skin ulcers, and in severe cases, diabetic foot complications, endanger the limbs and lives of patients [ 6 ]. The treatment of diabetic skin ulcers and diabetic foot is a difficult clinical problem, and exploring the mechanisms underlying their occurrence and development is important.
Human skin comprises 3 layered structures: epidermis, dermis, and hypodermis. The integrity and stability of the dermis and epidermis are the basis for the protection of the skin against invasion by foreign harmful substances [ 7 , 8 ]. The epidermis is mainly composed of keratinocytes, and the dermis is mainly composed of fibroblasts and microvascular endothelial cells. Studies have shown that the biological functions of skin keratinocytes, fibroblasts, and microvascular endothelial cells are altered by high glucose [ 9-11 ]. In our group, we performed transcriptome sequencing analysis (RNA-seq) of differential gene expression in skin keratinocytes, fibroblasts, and microvascular endothelial cells under high glucose conditions [ 12 ]. Keratin 17 (KRT17) expression was consistently upregulated in all 3 cell types, suggesting that KRT17 may play an important role in diabetic skin lesions.
KRT17 is an important member of the type I keratin family and is mainly expressed in basal cells of the epithelium, especially in keratinocytes [ 13 ]. Mutations in the KRT17 gene alter the structure of KRT17, which disrupts the integrity of the epidermis and can lead to inherited skin disorders such as pachyonychia congenita (congenital nail hypertrophy) and steatocystoma multiplex. As a cytoskeletal protein, KRT17 can regulate a variety of biological processes, including skin cell proliferation and growth, skin inflammation, and hair follicle cycling [ 14 ]. KRT17 is thus involved in a variety of conditions, including wound healing, psoriasis (in which it acts as an autoantigen that activates immune responses), and hair loss. As research has continued, it has gradually emerged that KRT17 can regulate tumorigenesis and metastasis in epithelial cells and their derivatives [ 15 , 16 ] and can serve as a biomarker for predicting the prognosis of a variety of tumors [ 14 ]. It is therefore clear that KRT17, although a cytoskeletal protein, performs a variety of regulatory functions in a wide range of diseases, especially those involving skin lesions. However, an extensive review of the literature found no studies of KRT17 in diabetic skin ulcers.
Skin wound healing is a complex biological process involving various cells and cytokines [ 17 ]. There is increasing evidence that fibroblasts proliferate, migrate, and convert to myofibroblasts during wound healing, and secrete various cytokines that play an important role in wound healing [ 18-20 ]. In the early stages of injury, fibroblasts at the wound edge begin to proliferate and migrate into the fibrin clot of the wound, producing large amounts of matrix proteins, such as collagen, proteoglycans, and elastin, which are involved in the formation of granulation tissue. Subsequently, fibroblasts migrate to the wound surface and gradually transform into a profibrotic phenotype that promotes protein synthesis. Alternatively, fibroblasts can be transformed into myofibroblasts, which are involved in wound contraction [ 21 ].
Based on the group's previous findings on the upregulation of KRT17 mRNA in 3 types of skin cells (human epidermal keratinocyte [HEK], human dermal fibroblast [HDF], and human dermal microvascular endothelial cell [HDMEC]) under high glucose conditions, we first validated this upregulation in diabetic cell models, animal models, and patient skin tissues, and then established an in vitro KRT17-stimulated HDF cell model to explore the role of KRT17 in diabetic skin wound healing by observing the effects of KRT17 on HDF proliferation and migration. | Funding
This project was supported by Grant No. 82270520 of the National Natural Science Foundation of China.
Author Contributions
L.Q. and Y.C. contributed substantially to the design of the study. Z.P. and L.Y.Q. contributed substantially to study completion. C.D.X., G.R.K., and Z.S. collected the data. Z.P. and L.Y.Q. analyzed and interpreted the data and wrote the initial draft of the manuscript. L.Q. and Y.C. revised the manuscript for important intellectual content. All the authors have read and approved the final manuscript.
Disclosures
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this study.
Data Availability
All raw data and materials are available upon reasonable request from the corresponding authors.
Ethical Approval
This animal study was approved by the Huazhong University of Science and Technology Ethics Committee ([2022] IACUC Number: 3110). This study was approved by the Wuhan Union Hospital Ethics Committee ([2022] No. 0298).
Abbreviations
DEG: differentially expressed gene
ELISA: enzyme-linked immunosorbent assay
HDF: human dermal fibroblast
HDMEC: human dermal microvascular endothelial cell
HaCaT: human immortalized keratinocyte
HEK: human epidermal keratinocyte
ITGA11: integrin alpha-11
KRT17: keratin 17
PBS: phosphate-buffered saline
qPCR: quantitative polymerase chain reaction
STZ: streptozotocin | CC BY | no | 2024-01-16 23:36:47 | J Endocr Soc. 2024 Jan 2; 8(2):bvad176 | oa_package/77/b4/PMC10776312.tar.gz
||
PMC10777671 | 38205164 | Patients and Methods
A retrospective review was conducted of the clinical records of all boys with BUDT who had undergone evaluation over a 14-year period from January 1, 2008, to December 31, 2021, by the endocrine service at the Royal Hospital for Children, Glasgow. Boys were selected by searching the hospital's operating theater database, which lists all orchidopexies undertaken within the study period. An undescended testis was defined by its presence outwith the scrotum, and the location of the testis was determined based on its site after induction of anesthesia. For the purposes of analysis, testes were classified as abdominal if either testis was identified in the abdomen, regardless of an inguinal position of the contralateral testis. Boys with a unilateral undescended testis, retractile testes, or anorchia were excluded from the study. The external genitalia were described using the External Masculinization Score (EMS) [ 12 ]. Data collection was undertaken in accordance with the ethics guidance of the National Health Service Health Research Authority decision tool ( https://www.hra-decisiontools.org.uk/research/ ) as an evaluation of the current clinical service provision.
Stimulation Tests and Assays
The human chorionic gonadotrophin (hCG) stimulation protocol consisted of hCG at a dose of 1500 U IM on days 1, 2, and 3 in the first week of the test. A blood sample for testosterone (T) was collected on days 1 and 4 of the test. In some cases, a prolonged hCG stimulation test was conducted, in which further injections were administered twice a week for 2 further weeks, after which a blood sample was collected on day 22 [ 13 ]. As in previously published studies, an adequate T response to the hCG stimulation test was defined as T greater than 3.5 nmol/L on day 4 of the test and greater than 9.5 nmol/L on day 22 [ 13 ].
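The day-4 and day-22 thresholds quoted in the text can be expressed as a simple classifier. The sketch below is illustrative only; the function and variable names are ours, not part of the study's analysis:

```python
def hcg_response(day4_t_nmol_l, day22_t_nmol_l=None):
    """Classify the testosterone (T) response to hCG stimulation.

    Adequacy thresholds, as quoted in the text:
      day 4:  T > 3.5 nmol/L (standard 3-day test)
      day 22: T > 9.5 nmol/L (prolonged test, when performed)
    day22_t_nmol_l is None when only the short test was done.
    """
    result = {"day4_adequate": day4_t_nmol_l > 3.5}
    if day22_t_nmol_l is not None:
        result["day22_adequate"] = day22_t_nmol_l > 9.5
    return result
```

Note that a boy can pass the day-4 criterion yet fail the day-22 one (e.g., `hcg_response(4.0, 9.2)`), the pattern observed in half of the boys with an inadequate prolonged response in the Results.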
For samples from 2008 to 2013, serum T was measured using a chemiluminescent microparticle immunoassay on the Abbott Architect analyzer (RRID AB_2848165, Abbott Laboratories Diagnostics, Santa Clara, CA, USA) after solvent extraction. Functional sensitivity was 0.5 nmol/L and the inter- and intra-assay coefficients of variation (CVs) were <8%. For samples after 2013, T was measured using liquid chromatography with tandem mass spectrometry on the Xevo TQS Tandem Mass Spectrometer (Waters Corporation, Milford, MA, USA) with a functional sensitivity of 0.1 nmol/L. Steroids were extracted from serum/plasma using Biotage supported liquid extraction, automated on the CTC PAL (MicroLiter Analytical Supplies Inc, Suwanee, GA, USA), followed by ultra-performance liquid chromatographic separation. The inter- and intra-assay CVs were also <8%.
For samples between 2008 and 2010, anti-Müllerian hormone (AMH) was measured using an enzymatically amplified 2-site immunoassay (Active MIS/AMH Elisa DSL-10_14400, Diagnostics Systems Laboratories, Webster, TX, USA), with a functional sensitivity of 4 pmol/L and intra- and inter-assay CVs of 5% and 8%, respectively [ 14 ]. For samples between 2010 and 2016, AMH was measured using a semi-automated Beckman Gen II ELISA assay (RRID AB_2923005, Beckman Coulter, Indianapolis, IN, USA) with a functional sensitivity of 4 pmol/L and inter- and intra-assay CVs of <5%. For samples after 2016, AMH was measured by enzyme-linked immunosorbent assay using the fully automated Beckman Access MDL assay (RRID AB_2892998, Beckman Coulter), with a functional sensitivity of 1 pmol/L and inter- and intra-assay CVs of <5%. Age-related AMH reference ranges were used to create centiles, and a low AMH was defined as below the 5th centile for age [ 13 , 14 ].
The LH releasing hormone (LHRH) stimulation test included collection of blood for LH and FSH, followed by administration of LHRH 100 micrograms IV and further collection of blood for LH and FSH at 30 and 60 minutes [ 15 ]. Hypogonadotrophic hypogonadism was defined as a lack of response to LHRH stimulation (no increase from the basal level). Gonadal failure was suspected in those cases where basal LH and FSH were high or their responses to LHRH were elevated (>10 mU/L). LH and FSH were measured on the Abbott Architect ci1600 using chemiluminescent microparticle immunoassays (RRIDs AB_2813909 and AB_2813910, Abbott Laboratories Diagnostics, Santa Clara, CA, USA), with a functional sensitivity of 0.1 IU/L and inter- and intra-assay CVs of <5% for both.
Genetic Analysis
DNA extraction from peripheral blood samples was performed as per clinical genetic testing standards for patients undergoing routine testing. Single nucleotide polymorphism microarray was performed using Illumina CytoCNP 850k v1.2 BeadChip. Data were analysed with BlueFuse Multi v4.4, BeadArray v2 algorithm. Analysis parameters were set to detect copy number variations ≥ 10 kb within OMIM morbid/Developmental Disorders Genotype-Phenotype Database genes, losses ≥ 200 kb or gains ≥ 500 kb. Any abnormal findings were compared with data held in the DECIPHER database ( https://decipher.sanger.ac.uk/ ). Where appropriate, Differences of Sex Development (DSD) panel testing was performed using a dedicated 56-gene panel as previously described [ 13 ].
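The CNV-reporting thresholds described above amount to a simple size-and-context filter. The sketch below is only an illustration of that logic; the names are ours and it is not part of the BlueFuse pipeline:

```python
def cnv_reportable(size_kb, kind, in_morbid_gene):
    """Apply the analysis thresholds described in the text.

    size_kb        : CNV size in kilobases
    kind           : 'loss' or 'gain'
    in_morbid_gene : overlaps an OMIM morbid / DDG2P gene
    A CNV is reported if it is >= 10 kb within a morbid gene,
    or a loss >= 200 kb, or a gain >= 500 kb anywhere.
    """
    if in_morbid_gene and size_kb >= 10:
        return True
    if kind == "loss" and size_kb >= 200:
        return True
    if kind == "gain" and size_kb >= 500:
        return True
    return False
```

For example, a 150 kb loss outside the gene lists falls below the reporting threshold, whereas a 50 kb loss within a morbid gene is reported.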
Statistical Analysis
The data were analyzed using GraphPad Prism, version 8.1.2. Continuous variables were analyzed using the Mann–Whitney U test and noncontinuous variables using Fisher's exact test. The Kruskal–Wallis test was used when more than 2 groups were compared. Multiple linear regression was used to identify associations between groups. P < .05 was considered statistically significant. | Results
Characteristics of the Study Population
A total of 243 bilateral orchidopexies were performed at RHCG between 2008 and 2021. Of these, 130 (53%) boys were seen by the endocrine team ( Table 1 ). There was no statistically significant difference in EMS between boys who were referred to the endocrine team and those who were not ( P = .07); however, those who were referred were more likely to have additional genital features (25% vs 5%, P < .0001) and extragenital features (55% vs 9%, P < .0001). Those who were referred to endocrinology also had a greater re-do rate (12% vs 5%).
Of the 130 who were seen by endocrinology, 99 (76%) had bilateral inguinal testes, 23 (18%) had bilateral impalpable testes with the testes located in the abdomen, and 8 (6%) had a combination of an impalpable abdominal testis and an inguinal testis. All 130 cases had bilateral orchidopexy with a median (range) age at first orchidopexy of 1 year (0.2, 18.0). A total of 16 (12%) required re-do orchidopexy, and 6 (5%) of the boys progressed to orchidectomy due to atrophied testes. The median EMS of the group was 10 (2, 11), and 71 (55%) had extragenital anomalies, most commonly neurodevelopmental delay (n = 10, 14%). There was a family history of undescended testes in 14 (11%). In total, 33 (25%) boys had “complex” BUDT with additional genital features. Of these, 21 (16%) boys had coexisting hypospadias [8 (38%) had distal hypospadias and 13 (62%) had proximal hypospadias] and 18 (14%) boys had associated micropenis. Overall, 67 (52%) had ultrasound imaging prior to initial orchidopexy, while only 2 (1%) had magnetic resonance imaging.
Genetic Evaluation
Genetic analysis was performed in 70 (54%) of the boys, with 42 (60%) undergoing the local targeted genetic panel for DSD and 28 (40%) undergoing single nucleotide polymorphism microarray. Of the 70, a genetic alteration was detected in 38 (54%), and, of these, 14 (37%) had complex BUDT. In 16 (40%), this was manifested as a chromosomal variant. These variants are listed in Table 2 . Of the further 22 with a genetic alteration identified on genetic testing, 10 (14%) had a SRD5A2 variant, 8 (12%) had a variant in a gene consistent with hypogonadotrophic hypogonadism [most commonly CHD7 in 4 (6%)], 2 (3%) had an AMHR variant, 1 (1%) had an AR variant, and 1 (1%) had a POLD1 variant. Details of these variants are listed in Table 3 . All of the boys who had genetic testing also had some form of endocrine evaluation.
Endocrine Evaluation
Of the 130, 99 (76%) had endocrine biochemistry performed; the median current age of those who had testing was 6 (4, 8) years compared to 8 (2, 20) years in those with no biochemistry performed, giving a median follow-up time of 2 (1, 18) years. Of those who had testing, the median EMS was 9 (5, 11) compared to 11 (6, 11) in those with no endocrine evaluation. In total, 87 (88%) boys had AMH measured, 82 (83%) had an hCG stimulation test, and 60 (61%) had an LHRH stimulation test. The median (range) age at endocrine evaluation was 1 year (0.1, 17.0). In 61 of the 99 cases (62%), endocrine biochemistry was performed after initial orchidopexy, with timing ranging from 1 month prior to surgery to 10.5 years after initial surgery. EMS predicted the likelihood of undergoing any biochemical testing, whereas the original position of the testes ( P = .06), presence of additional comorbidities ( P = .9), and family history ( P = .1) showed no association.
Overall, some evidence of endocrine dysfunction was identified in 38 (38%) of the boys who had endocrine testing ( Fig. 1 ). Of these, 6 (16%) had complex BUDT. In the 87 boys who had AMH measured, the level was within the normal range in 56 (64%) at a median (range) of 574 pmol/L (158, 1614); in 4 (5%) the level was above the normal range at a median (range) of 1778 pmol/L (962, 2477); and it was low in 27 (31%) at a median of 41 pmol/L (range <4, 262). Basal gonadotrophin measurements were performed in 86 (66%), and LHRH testing was performed in 60 (46%). FSH was raised in 19 (22%) with a median (range) of 22.6 U/L (11.9, 109.2), and LH was raised in 35 (41%) at a median (range) of 19.7 U/L (10.4, 95.1). A total of 82 boys had an hCG stimulation test, with 52 (63%) of these boys undergoing a prolonged test. Twenty (24%) of the boys had an inadequate T response on D4 of the hCG stimulation test, with a median (range) D4 T of 0.5 nmol/L (<0.5, 2.5). Sixteen boys (31%) had an inadequate T response on D22 of the hCG stimulation test, with a median D22 T of 1.6 nmol/L (<0.5, 9.2), half of whom had an adequate T response on D4 of the test. Basal T was within the normal range in 3/7 (43%) of the boys receiving their first assessment at pubertal age.
Of the 99 boys who had endocrine biochemistry performed, there were no differences in AMH, LH, FSH, or T levels between boys with isolated BUDT and those with complex BUDT ( Fig. 2 ). Boys with complex BUDT were more likely to have a genetic variant identified (41% vs 18%, P = .01).
A total of 53 (54%) had complete endocrine evaluation of gonadal function including all 3 tests (AMH, LHRH stimulation, and hCG stimulation). Overall, evidence of gonadal dysfunction was identified in 23 (43%) boys. The most common abnormality identified was a low AMH, in 26 (20%). There were no differences between those with normal and abnormal gonadal function according to position of testes ( P = .9) ( Fig 3 ), presence of additional comorbidities ( P = .09), family history ( P = .9), or EMS ( P = .9). In addition, there were no differences in AMH, T, LH, or number of endocrinopathies in those who required a re-do operation compared to those who did not, although basal FSH was higher in those who did require a re-do (1.6 U/L vs 0.9 U/L, P = .02) ( Fig. 4 ), and the presence of a genetic variant did not predict the need for a re-do operation. Of the 47 (48%) boys who had some repeat biochemistry taken, none demonstrated any improvement in gonadal function.
Long-term Outcomes
At the time of the study, of the 130 boys who had been seen by endocrinology, 51 (39%) were over the age of 12 years with a median age of 14.8 yrs (12.0, 24.4). Of these 51, 7 (14%) had required therapy. There were no reported cases of testicular germ cell tumors, and only 3 boys have had sperm analysis undertaken, with this demonstrating azoospermia in 2 (66%). Both of these boys had abdominal testes and evidence of gonadal dysfunction on biochemistry. | Discussion
This study describes the evaluation of boys with BUDT at a large tertiary pediatric center within the United Kingdom over a 14-year period. We demonstrate variation in the extent to which these children are investigated even within a single center, without any clear clinical predictors of this variation. This highlights the importance of a systematic approach to the investigation of such boys, given the high rates of coexisting genetic variants (54%), extragenital abnormalities (55%), and endocrine biochemical dysfunction (38%). Although EMS seemed to be influential in the decision to undertake endocrine biochemistry, there was no association between EMS and the likelihood of abnormal biochemistry, which is consistent with findings in a broader range of DSD conditions within the same center [ 16 ]. Indeed, the fact that no clear predictors for the existence of gonadal dysfunction could be identified suggests, first, that further research regarding the mechanisms of gonadal dysfunction in this group is required and, second, that guidelines regarding the management of boys born with BUDT should recommend testing for all such individuals. In particular, the presence of extragenital features cannot be relied upon to determine the need for additional testing, as the majority of those with endocrine dysfunction did not have any additional genital features on examination.
In those boys in whom genetic testing via targeted DSD gene panel or chromosomal microarray was undertaken, over half had a genetic variant identified. All of the chromosomal variants identified have previously been reported to be associated with undescended testes [ 4 , 17-29 ]. It is possible that detection rates may also vary depending on the genetic testing modality employed. Notably, this yield was achieved without whole exome analysis, which has been shown to increase the diagnostic yield for genetic variants to 64% in males with XY DSD [ 30 ]. As such, boys with BUDT should be offered chromosomal analysis as well as more detailed molecular genetic testing, particularly those with complex BUDT, in whom a variant was most likely to be identified.
Our data demonstrate that one-third of boys with BUDT have inadequate T in response to hCG stimulation tests. Given that half of the boys with low T on prolonged hCG testing had a normal D4 testosterone response, the endocrine evaluation of boys with BUDT should include prolonged hCG testing in preference to standard short hCG stimulation tests. Our findings support previous reports demonstrating that while a short hCG stimulation regimen may exclude some conditions such as 17β-hydroxysteroid dehydrogenase-3 deficiency, some boys with BUDT require prolonged stimulation to thoroughly assess the ability of the testes to produce androgens [ 31 ]. In addition, while some studies have demonstrated variable success rates in terms of using hCG to promote testicular descent, recurrence is high using this method, likely due to coexisting anatomical abnormalities in boys with BUDT, most commonly due to abnormal attachment of the gubernaculum [ 32 ]. However, hCG testing does provide invaluable information regarding Leydig cell function, which can be used to construct a management plan and counsel boys and their parents regarding likely long-term outcomes. That said, where children are referred and seen around the ages of 3 to 6 months when minipuberty may be occurring, basal T and gonadotrophin levels may prove useful in understanding the function of the testes at that early stage of life. While it is possible that normal results during minipuberty may exclude hypogonadotrophic hypogonadism, it is unclear whether they would obviate the need for further endocrine monitoring for incipient primary gonadal insufficiency.
While this study provides meaningful data on a large number of children with BUDT from a single tertiary center, the retrospective nature of the study means that not all affected cases underwent a detailed endocrine and genetic evaluation, likely owing to the involvement of different clinical care providers over the study period. As such, there is a need to examine a larger cohort of cases and to determine whether any particular factors raise clinical suspicion and alter the decision to refer to endocrinology or to undertake genetic testing. It is also possible that the timing of the endocrine evaluation in relation to the age of the child or the orchidopexy may bear a relationship to the results, and this will also require analysis of a larger number of cases, ideally via a prospective study. The high prevalence of endocrine and genetic findings also highlights the need for ongoing surveillance of long-term outcomes including gonadal function, sex hormone supplementation, fertility, and tumor development through a multidisciplinary team [ 33 ]. Of the children who had reached pubertal age at the time of the study, 10% required T supplementation for induction of puberty. It is possible, however, that testicular insufficiency may develop later in life, and, as such, boys with a past history of BUDT should be made aware of the clinical features of hypogonadism in adulthood. These findings are sufficient for us to recommend that all boys with BUDT follow a standardized pathway for evaluation ( Fig. 5 ), similar to what has previously been proposed for differences and disorders of sex development [ 34 ]. However, further research is required to evaluate the clinical utility of such a pathway and its long-term benefit to the growing child and adolescent.
In conclusion, gonadal dysfunction was identified in over a third of boys with BUDT. Given that there were no clear predictors of the development of gonadal dysfunction, we recommend biochemical and genetic analysis for all boys with BUDT.
Background
Bilateral undescended testes (BUDT) may be a marker of an underlying condition that affects sex development or maturation.
Aims
To describe the extent of gonadal dysfunction in cases of BUDT who had systematic endocrine and genetic evaluation at a single tertiary pediatric center.
Methods
A retrospective review was conducted of all boys with BUDT who had endocrine evaluation between 2008 and 2021 at the Royal Hospital for Children, Glasgow (RHCG). Continuous variables were analyzed using Mann–Whitney U and non-continuous variables using Fisher’s exact, via Graphpad Prism v 8.0. Multivariable logistic regression was used to identify any associations between groups. A P < .05 was considered statistically significant.
Results
A total of 243 bilateral orchidopexies were performed at RHCG between 2008 and 2021. Of these, 130 (53%) boys were seen by the endocrine team. The median (range) age at first orchidopexy was 1 year (0.2, 18.0), with 16 (12%) requiring re-do orchidopexy. The median External Masculinization Score of the group was 10 (2, 11), with 33 (25%) having additional genital features. Of the 130 boys, 71 (55%) had extragenital anomalies. Of the 70 who were tested, a genetic abnormality was detected in 38 (54%), most commonly a chromosomal variant in 16 (40%). Of the 99 who were tested, endocrine dysfunction was identified in 38 (38%).
Conclusion
Genetic findings and evidence of gonadal dysfunction are common in boys who are investigated secondary to presentation with BUDT. Endocrine and genetic evaluation should be part of routine clinical management of all cases of BUDT. | Undescended testis is a common congenital anomaly with a birth prevalence of between 1.5% and 8% of newborns [ 1 ]. The majority will have a unilateral undescended testis, and only 1/10 of these newborns will present with bilateral undescended testes (BUDT) [ 2 ]. Undescended testes, especially when they are bilateral or when they are associated with other genital anomalies such as micropenis or hypospadias, may be a marker for a genetic condition that affects sex development such as hypogonadotrophic hypogonadism [ 3 ] or a disorder of gonadal development, androgen synthesis, or androgen action [ 4 ]. In addition to being a marker of an underlying condition, undescended testes themselves may have an impact on long-term gonadal outcome including pubertal development [ 5 ], fertility [ 6 ], and tumor development [ 7 ]. While recent guidance recommends that boys with BUDT should undergo expert evaluation [ 8 ], and there is evidence that boys with undescended testes may display some evidence of gonadal dysfunction [ 9-11 ], there is little consensus on the extent of the evaluation required in these cases.
The aim of the current study was to describe the spectrum of abnormalities encountered in boys undergoing comprehensive endocrine evaluation at a single tertiary pediatric center to provide a deeper insight into the prevalence of endocrine abnormalities in these cases and consider the rationale for investigating this group of boys. | Acknowledgments
A.K.L.H. is funded by an NHS Education for Scotland/Chief Scientist Office Clinical Lectureship PCL/21/05.
Disclosures
The authors have nothing to disclose.
Data Availability
Some or all datasets generated during and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request. | CC BY | no | 2024-01-16 23:36:47 | J Endocr Soc. 2023 Dec 12; 8(2):bvad153 | oa_package/b9/75/PMC10777671.tar.gz |
||
PMC10781692 | 38199983 | Introduction
Suicide risk in patients with major depressive disorder (MDD) is 20 times higher than that of the general population [ 1 , 2 ], and suicidal behavior can occur at any point during a major depressive episode [ 3 ]. The lifetime prevalence of suicide attempts in patients with MDD is 31% [ 4 ], and more than half of patients experience suicidal ideation beforehand [ 5 ]. Suicide-related costs account for about 5% of the total incremental costs of adults with MDD [ 6 ], representing a substantial burden to patients and their families.
The general treatments for alleviating suicidal ideation include various antidepressants [ 7 – 9 ], lithium [ 10 , 11 ], ketamine [ 12 , 13 ], electroconvulsive therapy (ECT) [ 14 , 15 ], and cognitive behavioral therapy (CBT) [ 16 ]. However, antidepressants, lithium, and psychotherapy usually require weeks to exert anti-suicide effects. Furthermore, antidepressants may increase suicide risk in children and adolescents, as well as in adults during early-phase pharmacotherapy [ 17 , 18 ]. ECT is an effective way to rapidly relieve suicidal ideation, but its limited tolerability and complex side effects restrict its application [ 19 ]. Evidence also suggests that ketamine may be a promising rapid-acting option, but its effects seem to be short-lived [ 12 ]. The limitations of current treatments motivate the search for safe and rapid-acting interventions to relieve suicidal ideation in patients with MDD.
Recent evidence suggests that a non-invasive treatment option, repetitive transcranial magnetic stimulation (rTMS), may be a rapid and safe way of relieving both depression and suicidal ideation. In the "Evidence-based guidelines on the therapeutic use of repetitive transcranial magnetic stimulation (rTMS): An update (2014-2018)", level A evidence (definite efficacy) is assigned to high-frequency (HF)-rTMS over the left dorsolateral prefrontal cortex (DLPFC) in MDD [ 20 ]. Indeed, MRI-navigated rTMS has shown high efficacy and rapid action in the treatment of depression. Recently, an individualized, accelerated, high-dose intermittent theta-burst stimulation (iTBS) protocol, Stanford Neuromodulation Therapy (SAINT), was proposed by Williams et al. [ 21 ]. The safety, effectiveness, and rapid action of this protocol have been validated in both open-label and double-blind studies. Five days of treatment yielded a high response rate of 85.7% and a remission rate of 78.6% for treatment-resistant depression [ 22 ]. SAINT has been approved by the FDA as an effective treatment for refractory depression. Notably, it has also been shown to be a potential way to rapidly reduce the severity of suicidal ideation [ 23 ].
Although SAINT appears to be highly effective, the neural mechanisms underlying its rapid-acting antidepressant and suicide prevention effects remain unclear. The brain is a complex network comprising functionally specialized regions that flexibly interact to support a diverse repertoire of cognitive and behavioral functions [ 24 , 25 ]. Characterizing the brain's connectivity, which constitutes a functional connectome "fingerprint" [ 26 ], may help to elucidate the neural mechanisms supporting the rapid-acting effects of SAINT. Indeed, accumulating evidence shows that the therapeutic efficacy of rTMS might be closely associated with the functional connectivity of its stimulation target on the DLPFC with the subgenual anterior cingulate cortex (sgACC) [ 27 – 31 ]. However, the modulatory effects of rTMS are not restricted to DLPFC-sgACC connectivity; they also manifest in distributed brain networks associated with depression, such as the default mode network (DMN) [ 32 , 33 ], affective network (AN) [ 34 ], salience network (SN) [ 35 ], reward network (RN) [ 36 ], and visual network (VN) [ 35 ]. How does the therapeutic intervention propagate from the stimulation target to these distributed networks? Could the neural pathways conveying the rTMS stimulation account for its rapid-acting antidepressant and suicide prevention effects?
To address the above-mentioned issues, we collected functional magnetic resonance imaging (fMRI) data from 32 MDD patients with suicidal ideation before and immediately after 5-day SAINT. We investigated the information flow from the rTMS target to core regions associated with depression and suicidal ideation using effective connectivity analysis based on dynamic causal modelling (DCM). Effective connectivity differs from conventional functional connectivity, which simply computes the correlations among the time courses of interacting regions; instead, it can infer the causal influence of one region on another and depict the directions of signal flow within a brain network. Our results showed that the rapid-acting antidepressant effects of SAINT were related to effective connections of the sgACC, while the suicide prevention effects were more associated with the effective connectivity of the insula (INS).
Participants
The study was approved by the Ethics Committee of the First Affiliated Hospital, Fourth Military Medical University, and was conducted in accordance with the Declaration of Helsinki (clinicaltrial.gov identifier: NCT04653337). Written informed consents were obtained from all the participants.
All patients were recruited from the Department of Psychiatry at the First Affiliated Hospital, Fourth Military Medical University, from January 2021 to October 2021, according to the following criteria: (i) 18–60 years old; (ii) meeting the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) for unipolar MDD, assessed by the Mini-International Neuropsychiatric Interview (MINI); (iii) right-handedness; (iv) a score > 17 on the 17-item Hamilton Depression Rating Scale (HAMD-17) [ 37 ]; (v) a score ≥ 6 on the Beck Scale for Suicidal Ideation-Chinese Version (BSI-CV) [ 5 , 38 ]; (vi) normal results on physical examination and electroencephalography. We excluded patients with (i) antidepressant treatment within the 2 months prior to the study; (ii) any other current or past psychiatric axis-I or axis-II disorders; (iii) severe physical illnesses; (iv) psychotic symptoms or alcohol or drug abuse; (v) a history of neurological disorders including seizures, cerebral trauma, or MRI evidence of structural brain abnormalities; (vi) contraindications to MRI and rTMS, such as metallic implants in the body, cardiac pacemakers, or claustrophobia; (vii) acute suicidal or self-injurious behavior in need of immediate intervention; (viii) pregnancy, lactation, or a planned pregnancy for females.
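For illustration only, the quantitative inclusion thresholds above (age, HAMD-17, and BSI-CV cutoffs) can be written as a simple screening check. The function name and structure are ours; the clinical, handedness, and exclusion criteria are deliberately not modelled:

```python
def meets_quantitative_criteria(age_years, hamd17, bsi_cv):
    """Check the numeric inclusion thresholds described in the text:
    age 18-60 years, HAMD-17 score > 17, and BSI-CV score >= 6.
    All other (clinical and safety) criteria are out of scope here.
    """
    return 18 <= age_years <= 60 and hamd17 > 17 and bsi_cv >= 6
```

Note that the HAMD-17 cutoff is strict (a score of exactly 17 does not qualify), whereas the BSI-CV cutoff is inclusive.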
Thirty-four participants were enrolled in this study. Two patients withdrew from the study due to personal reasons after the first day of treatment. For ethical and safety reasons, venlafaxine (75 mg/d) or duloxetine (30 mg/d) were prescribed at the beginning of the treatment. Dexzopiclone or zolpidem was also used to improve the sleep quality of individuals who suffered from severe insomnia. Figure 1 describes the workflow of the study, and the demographic characteristics of the patients are provided in the supplementary Table 1 .
Clinical assessments
Suicidal ideation and depression symptoms were assessed with clinician-rated and self-report scales at baseline, immediately after SAINT (i.e., after the last session), and 2 and 4 weeks after the full SAINT course. The severity of suicidal ideation was measured by the BSI-CV, item 3 of the HAMD-17, and item 10 of the Montgomery-Asberg Depression Rating Scale (MADRS). Depression symptoms were assessed with the HAMD-17 and MADRS. At the end of each day's treatment, the 6-item HAMD (HAMD-6) was also used to assess depression symptoms. Potential neurocognitive side effects were assessed using a neuropsychological test battery before and immediately after SAINT, including the Perceived Deficits Questionnaire-Depression (PDQ-D) [ 39 ], Digit Span Test (DST) [ 40 ], and Digit Symbol Substitution Test (DSST) [ 41 ].
BSI-CV scores were the main clinical outcome of the study. Response of suicidal ideation was defined as a reduction ≥ 50% on the BSI-CV, while remission of suicidal ideation was defined as a reduction ≥ 50% together with a score < 6 on the BSI-CV. Response of depression symptoms was defined as a reduction ≥ 50% on the HAMD-17, MADRS, and HAMD-6 scales. Remission of depression symptoms was defined as a score < 8 on the HAMD-17 [ 42 ], a score < 11 on the MADRS [ 43 ], a score < 5 on the HAMD-6 [ 44 ], and a score < 13 on the BDI [ 45 ]. All statistical analyses of clinical data were conducted using SPSS, version 26 (IBM, Armonk, N.Y.). The level of statistical significance was set at p = 0.05. As one patient failed to participate in the clinical assessment 4 weeks after SAINT, the missing data were replaced with the mean of all participants' scores at that time point. Changes in BSI-CV, HAMD-17, HAMD-6, and MADRS scores were assessed with repeated-measures ANOVA, while changes in PDQ-D, DST, and DSST were evaluated with paired t tests. The relevant results are displayed in Table 1 and Fig. 2 .
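As a minimal sketch of the outcome definitions above (illustrative Python, not the authors' SPSS analysis; the patient scores are hypothetical example values):

```python
# Classifying suicidal-ideation response and remission from BSI-CV scores,
# using the definitions stated in the text.

def bsi_response(baseline: float, follow_up: float) -> bool:
    """Response: reduction >= 50% from baseline on the BSI-CV."""
    return (baseline - follow_up) / baseline >= 0.5

def bsi_remission(baseline: float, follow_up: float) -> bool:
    """Remission: reduction >= 50% AND follow-up score < 6 on the BSI-CV."""
    return bsi_response(baseline, follow_up) and follow_up < 6

# Hypothetical patient: baseline BSI-CV 18, post-SAINT 5
print(bsi_response(18, 5))   # True: 72% reduction
print(bsi_remission(18, 5))  # True: also below the remission cut-off of 6
print(bsi_remission(18, 8))  # False: >50% reduction, but the score is not < 6
```

The remission criterion is strictly stronger than the response criterion, which is why the reported remission rates are always at or below the response rates.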
Procedures of MRI-navigated rTMS
The MRI-navigated rTMS treatment was delivered by a Black Dolphin Navigation Robot system (SmarPhin S-50, Solide Brain Control Medical Technology Co., Ltd., Xi'an, China). The individualized rTMS stimulation target was defined as the peak subunit on the DLPFC that was most negatively connected to the sgACC, following Cole et al. [ 23 ]. However, the definition of the sgACC differed slightly from that of Cole et al. [ 23 ]: in the current study, parcels No. 187 and 188 of the Brainnetome Atlas (BNA) ( https://atlas.brainnetome.org/bnatlas.html ) [ 46 ] were selected as the sgACC, to improve the signal-to-noise ratio and to avoid mixing in signal from the corpus callosum. After definition of the individualized stimulation target, 5-day sgACC FC-guided rTMS treatment, i.e., SAINT, was given to each patient [ 22 , 23 ]. Specifically, each session consisted of three consecutive iTBS trains delivered at 90% of the resting motor threshold (RMT) over 9 min 52 s. Ten sessions of iTBS (18,000 pulses in total), separated by 50-min inter-session intervals, were delivered to each subject every day. The whole treatment lasted for 5 consecutive days, and each patient received 90,000 pulses in total.
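The dosing arithmetic implied above can be checked as follows (a sketch assuming the standard iTBS train of 600 pulses, so that "three consecutive iTBS" gives 1,800 pulses per session; the 600-pulse figure is an assumption, not stated in the text):

```python
# SAINT dosing arithmetic: pulses per session, per day, and per full course.
PULSES_PER_ITBS_TRAIN = 600   # standard iTBS train (assumption)
TRAINS_PER_SESSION = 3        # "three consecutive iTBS"
SESSIONS_PER_DAY = 10         # ten sessions/day, 50-min inter-session interval
DAYS = 5

pulses_per_session = PULSES_PER_ITBS_TRAIN * TRAINS_PER_SESSION   # 1,800
pulses_per_day = pulses_per_session * SESSIONS_PER_DAY            # 18,000
total_pulses = pulses_per_day * DAYS                              # 90,000

print(pulses_per_day, total_pulses)  # 18000 90000 — matches the protocol text
```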
Image acquisition
High-resolution MRI data were acquired on a 3.0 T UNITED 770 scanner before and after treatment. Parameters for 3D T1-weighted structural imaging were: slices = 192, repetition time = 7.24 ms, echo time = 3.10 ms, slice thickness = 1.0 mm, matrix size = 512 × 512, field of view = 256 × 256 mm 2 , flip angle = 10°. Parameters for eyes-closed resting-state fMRI were: slices = 35, repetition time = 2000 ms, echo time = 30 ms, slice thickness = 4 mm, matrix size = 64 × 64, field of view = 224 × 224 mm 2 , flip angle = 90°. The pre-treatment (i.e., baseline) and post-treatment resting-state fMRI sessions each lasted about 12 minutes.
Data preprocessing
The MRI data were preprocessed with the statistical parametric mapping software package (SPM12, http://www.fil.ion.ucl.ac.uk/spm/software/spm12/ ) and the GRETNA toolbox ( https://www.nitrc.org/projects/gretna/ ). After discarding the first 10 images to allow for magnetic field stabilization, slice-timing correction was performed to correct differences in the acquisition time of slices within a volume. Next, realignment was used to correct head motion, and two subjects with translation larger than 3.5 mm or rotation larger than 3.5° were excluded. The images were then normalized to standard Montreal Neurological Institute (MNI) space and spatially smoothed with a Gaussian kernel (full width at half maximum (FWHM) = 6 mm). Temporal detrending was applied to remove low-frequency signal drift. Covariates, including the Friston-24 head motion parameters and signals from white matter and CSF, were then regressed out. Furthermore, we performed global signal regression to remove spatially coherent confounds [ 27 , 29 , 47 ]. Finally, the fMRI time series were temporally filtered with a bandpass filter (0.01–0.1 Hz) for the functional connectivity analyses. Functional connectivity (i.e., correlation) analyses were used to identify regions of interest (ROIs) for subsequent effective connectivity (i.e., DCM) modelling.
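A minimal sketch of one step in this pipeline, nuisance regression, is shown below in plain Python (the real pipeline in SPM12/GRETNA regresses all covariates out of every voxel simultaneously; here a single hypothetical motion regressor is removed from one time series by ordinary least squares):

```python
# Regress a nuisance time course (e.g., one head-motion parameter) out of a
# voxel time series: fit signal ~ intercept + beta*confound, keep the residuals.
def regress_out(signal, confound):
    n = len(signal)
    mx = sum(confound) / n
    my = sum(signal) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(confound, signal)) / \
           sum((x - mx) ** 2 for x in confound)
    # residuals = signal with the confound's linear contribution removed
    return [y - (my + beta * (x - mx)) for x, y in zip(confound, signal)]

motion = [0.1, 0.2, 0.3, 0.4, 0.5]           # hypothetical motion trace
bold   = [1.0, 2.1, 2.9, 4.2, 4.8]           # hypothetical, motion-contaminated BOLD
clean  = regress_out(bold, motion)
print([round(v, 3) for v in clean])          # residuals; their mean is ~0
```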
It should be noted that only 28 of the 32 patients completed both the pre-treatment and post-treatment MRI scans. After 2 of these were excluded because of large head motion, 26 subjects entered the subsequent fMRI analyses.
Functional connectivity profiles of the stimulation target
Functional connectivity with the individualized stimulation targets was investigated first. In detail, a 6-mm-radius spherical ROI centered at each participant's DLPFC target MNI coordinates (Supplementary Table 2 ) was defined as the seed region. The Pearson correlations between the seed time series and the time series of every voxel across the whole brain were then calculated, yielding target-based functional connectivity (i.e., an r map). To ensure a Gaussian distribution of the residuals of the ensuing parametric tests, all r values were Fisher's Z transformed. One-sample t -tests were then conducted on the transformed z maps, and two signed maps of target-based functional connectivity were obtained, i.e., a positive functional connectivity map and a negative functional connectivity map (Fig. 3 ). Figure 3a shows regions that were negatively correlated with the DLPFC targets, while Fig. 3b shows regions that displayed positive correlations with the DLPFC targets ( p < 0.05, uncorrected).
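The statistical steps above (Fisher's r-to-z transform, then a one-sample t-test across subjects) can be sketched as follows; the per-subject correlation values are hypothetical, not data from the study:

```python
import math

# Fisher's r-to-z transform followed by a one-sample t-test on the z values.
def fisher_z(r: float) -> float:
    return 0.5 * math.log((1 + r) / (1 - r))  # arctanh(r)

def one_sample_t(zs):
    n = len(zs)
    mean = sum(zs) / n
    var = sum((z - mean) ** 2 for z in zs) / (n - 1)
    return mean / math.sqrt(var / n)          # t statistic with n-1 df

# Hypothetical per-subject target-to-region correlations (anticorrelated case)
rs = [-0.42, -0.35, -0.51, -0.28, -0.44, -0.38]
zs = [fisher_z(r) for r in rs]
print(round(one_sample_t(zs), 2))             # a strongly negative t value
```

Voxels whose group-level t value is significantly negative end up in the negative functional connectivity map; significantly positive voxels end up in the positive map.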
Stimulation target-based effective connectivity analysis
Effective connectivity analysis was confined to left-hemisphere regions. Thus, the left caudate (CAU), precuneus (PCUN), hippocampus (HIP), and insula (INS) were included in the analysis (Fig. 3c, d ). For midline regions, including the medial PFC (mPFC) and sgACC (Fig. 3c, d ), the mean time courses of the bilateral clusters were extracted. For each brain region, a binary mask was generated according to the functional connectivity maps and the Brainnetome Atlas (BNA) [ 46 ] ( https://atlas.brainnetome.org/bnatlas.html ) (Supplementary Table 2 , Fig. 3a–d ). The mean time course of the voxels within each mask was then extracted.
Furthermore, to validate whether the depression functional circuit map of our study is consistent with the convergent network proposed by Siddiqi et al. [ 48 ], partial correlations between target-based connectivity and changes in depression scores (HAMD-17 and MADRS), controlling for baseline depression scores, were computed (Supplementary Fig. 1 ).
Two DCMs were constructed based on the ROIs showing positive and negative correlations with the target ROI. In brief, ROIs from the negative functional connectivity map were combined with the target (seed) ROI to create a fully inter-connected dynamic causal model, named the negative correlation effective connectivity model (NCECM), while the ROIs from the positive functional connectivity map were used to construct the corresponding positive correlation effective connectivity model (PCECM). Directed (i.e., causal) effective connectivity within the NCECM and PCECM was estimated using spectral dynamic causal modeling (spDCM) [ 49 ] as follows.
Effective connectivity analysis with spDCM
The causal interactions among ROIs were modeled with random differential equations for the hidden neuronal states [ 49 ]:

dx(t)/dt = A x(t) + v(t)

Here, x(t) = [x_1(t), x_2(t), ..., x_n(t)]^T denotes the hidden neuronal states representing the neuronal activity of the n interacting ROIs. A represents the effective connectivity, characterizing the strength of directed connections among these ROIs, while v(t) models endogenous fluctuations with a parameterized spectral profile. The neuronal model is then supplemented with standard hemodynamic state equations that model the translation from unobserved neuronal activity to observed BOLD signals from the ROIs [ 50 ]. The model was then inverted, and model parameters were estimated in the frequency domain by fitting complex cross-spectra through a Variational Laplace procedure [ 49 ]. These (spectral) data features were evaluated prior to the temporal filtering used to identify ultra-slow functional connectivity.
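The linear stochastic neuronal model described here can be illustrated with a toy Euler simulation for two regions (all parameter values below are illustrative, not fitted DCM estimates; negative diagonal entries of A encode self-inhibition, and off-diagonal entries encode between-region influence):

```python
import random

# Toy simulation of dx/dt = A x + v for two interacting regions.
random.seed(0)
A = [[-0.5,  0.0],    # self-inhibition keeps each node stable
     [ 0.3, -0.5]]    # excitatory influence of region 1 on region 2
x = [0.0, 0.0]
dt = 0.01
trace = []
for _ in range(5000):
    v = [random.gauss(0, 0.1), random.gauss(0, 0.1)]   # endogenous fluctuations
    dx = [sum(A[i][j] * x[j] for j in range(2)) + v[i] for i in range(2)]
    x = [x[i] + dt * dx[i] for i in range(2)]
    trace.append(tuple(x))

# Negative self-connections make the system stable, so activity stays bounded.
print(all(abs(a) < 5 and abs(b) < 5 for a, b in trace))  # True
```

In spDCM the parameters of A are not simulated but inferred from the observed cross-spectra of the BOLD signals.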
For each subject, a fully connected model with reciprocal connections between all pairs of ROIs was first defined for NCECM and PCECM, respectively. Each fully connected model was then optimized to maximize model evidence (as scored by variational free energy). The posterior probability of the parameters of this fully connected model — from each subject — was then entered into a second-level group analysis. The parametric empirical Bayes (PEB) framework [ 51 , 52 ] was used to obtain second-level (i.e., between subject and session) commonalities and differences in effective connectivity with a General Linear Model (GLM). The advantage of the PEB framework over classical statistics is that both the posterior expectations and covariance of the parameters are considered when estimating effects at the group level.
In summary, group-level effects were modeled with the following hierarchical model, following [ 52 ]:

Y_i = Γ(θ_i) + X_0 β_0 + ε_i^(1)
θ = X β + ε^(2)
β = η + ε^(3)

Here, Y_i represents the observed BOLD data features of subject i. At the first level, Y_i is modeled with a DCM with parameters θ_i, a GLM of confounding (and nuisance) effects with design matrix X_0 and parameters β_0, and observation noise ε_i^(1). At the second level, the DCM parameters θ are modeled with a second GLM with design matrix X and group-level parameters β, which parameterize commonalities and differences in effective connectivity over subjects. The second-level GLM included a constant term modelling group means (i.e., commonalities) and differences due to (i) pre- and post-treatment effects, and response in terms of (ii) suicidal ideation and (iii) depression (see Fig. 1c , middle panel). ε^(2) models random between-subject effects that are not captured by the GLM. The second-level parameters β are assumed to have a prior expectation η and residuals ε^(3). To optimize the ensuing PEB model, Bayesian Model Reduction (BMR) [ 51 ] was used to search over all reduced PEB models. Finally, Bayesian Model Averaging (BMA) was employed to summarize connectivity over all plausible (reduced) PEB models.
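One intuition behind the PEB approach described above is that group estimates weight each subject's connection estimate by its posterior precision, rather than averaging point estimates as a classical summary-statistic analysis would. A toy sketch (all numbers hypothetical, and this is a caricature of PEB, not its actual variational scheme):

```python
# Precision-weighted group mean of per-subject connection estimates.
def precision_weighted_mean(means, variances):
    weights = [1.0 / v for v in variances]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# Hypothetical per-subject estimates of one connection and posterior variances
means     = [-0.30, -0.25, -0.10]
variances = [ 0.01,  0.01,  0.25]   # the third subject is very uncertain

print(round(sum(means) / 3, 3))                             # naive mean: -0.217
print(round(precision_weighted_mean(means, variances), 3))  # -0.272: dominated by precise subjects
```

The uncertain subject barely moves the precision-weighted estimate, which is why PEB is more robust than averaging first-level point estimates.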
The ensuing Bayesian model averages of effective connectivity at the second level were used to identify commonalities (i.e., group means) that describe the functional architecture that was conserved over subjects and sessions. The Bayesian model averages of effective connectivity at the first level were used to test for correlations with clinical scores. These Bayesian model averages represent the most efficient estimates of connectivity because they inherit constraints from the second-level GLM.
Correlations between fMRI connections and clinical scores
To explore whether (functional and effective) connectivity estimates could predict the effects of rTMS treatment in mitigating depression and suicidal ideation symptoms, we calculated the correlations between connectivity estimates and changes in depression (HAMD-17, MADRS) and suicidal ideation (BSI-CV) scores, respectively. Furthermore, correlations between the percentage change in connections and clinical scores (HAMD-17, MADRS, and BSI-CV) were computed to explore the rTMS treatment effect. The post-treatment connectivity estimates were also correlated with clinical scores to explore the after-effects of the 5-day treatment.
For all 26 patients, the MNI coordinates of the stimulation targets, the corresponding target depths, resting motor thresholds (RMT), and relevant clinical outcomes are displayed in Table 2 . No severe adverse events occurred during the trial, and the most common side effect was headache (Supplementary Table 3 ). All side effects were mild, well tolerated, and resolved rapidly after stimulation.
Suicidal ideation
Changes in suicidality scale scores were assessed with repeated-measures ANOVA. After 5 days of treatment, there were significant decreases in the BSI-CV ( F = 81.34, df = 2, 61, p < 0.001; Fig. 2a, b ), item 3 of the HAMD-17 ( F = 317.90, df = 2, 66, p < 0.001; Fig. 2a, c ), and item 10 of the MADRS ( F = 314.72, df = 2, 64, p < 0.001; Fig. 2a, d ) across follow-up. The mean BSI-CV score immediately after SAINT was reduced by 65.23%. Bonferroni-corrected post-hoc comparisons revealed a significant difference in BSI-CV scores between 0 and 4 weeks after treatment, but not between 0 and 2 weeks after treatment. Remission and response rates of suicidal ideation after treatment were 56.25% and 65.63% (0 weeks), 59.38% and 81.25% (2 weeks), and 75.00% and 93.33% (4 weeks), respectively (Table 1 ).
Depression symptoms
Statistical analysis revealed a significant effect of time (weeks) on mean HAMD-17 scores ( F = 267.30, df = 3, 93, p < 0.001; Fig. 2a, e ) and a significant effect of day on mean HAMD-6 scores ( F = 102.67, df = 3, 95, p < 0.001; Fig. 2a, h ), with scores at all follow-up time points significantly lower than at baseline (Bonferroni-corrected pairwise comparisons, p < 0.001). These results were recapitulated for the MADRS ( F = 351.73, df = 2, 68, p < 0.001; Fig. 2a, f ) and the BDI ( F = 67.99, df = 3, 93, p < 0.001; Fig. 2 a, g). After 5 days of treatment, the mean HAMD-17 score was reduced by 66.39%, and the mean MADRS score by 58.95%. Bonferroni-corrected post-hoc comparisons demonstrated a significant difference in HAMD-17 scores between 0 and 4 weeks after treatment, but not between 0 and 2 weeks after treatment. The remission rate (HAMD-17 score < 8) and response rate (a reduction ≥ 50% from baseline in HAMD-17) after treatment were 53.13% and 81.25% (0 weeks); 56.25% and 90.63% (2 weeks); and 81.25% and 96.88% (4 weeks), respectively (Table 1 ).
Functional connectivity with the stimulation target
The pre-treatment stimulation target-based functional connectivity pattern is shown in Fig. 3 (a, b; p < 0.05, uncorrected). No significant differences were detected between the pre- and post-treatment sessions (FDR-corrected, p < 0.05).
However, the baseline (pre-treatment) functional connectivity between the DLPFC and PCUN, and between the DLPFC and mPFC, was negatively correlated with the reduction in HAMD-17 ( p = 0.037 and p = 0.039, respectively) (Fig. 3e, f ). These functional anticorrelations strengthened after rTMS treatment for the PCUN ( p = 0.033) and the mPFC ( p = 0.029) (Fig. 3g, h ). Moreover, MADRS scores were also negatively correlated with PCUN connectivity after treatment ( p = 0.015) (Fig. 3k ). We did not find any significant correlations between connectivity and suicidality.
Stimulation target-based effective connectivity analysis
Treatment effect for all subjects
In the NCECM (Fig. 4a, b ), the stimulation target (DLPFC) exerted inhibitory influences on the PCUN and INS. Signals from the INS were then sent to the sgACC, the HIP, and the INS itself. Because the connection from the DLPFC to the INS is inhibitory, while the connections from the INS to the sgACC, INS, and HIP are excitatory, these influences together produce net inhibition, accounting for the negative functional connectivity with the DLPFC seen in the sgACC, INS, and HIP. These inhibitory influences then propagate from the sgACC to the PCUN, and from the HIP to the sgACC and INS.
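The sign bookkeeping in this argument can be made explicit with a small sketch (the edge signs below are illustrative only, standing in for the signs of the estimated DCM connections, not their actual strengths):

```python
# Chaining edge signs along an indirect path: a negative (inhibitory) first hop
# followed by positive (excitatory) hops yields a net negative influence.
edges = {
    ("DLPFC", "INS"): -1.0,
    ("INS", "sgACC"): +1.0,
    ("INS", "HIP"):  +1.0,
}

def indirect_sign(path):
    sign = 1.0
    for a, b in zip(path, path[1:]):
        sign *= edges[(a, b)]
    return sign

print(indirect_sign(["DLPFC", "INS", "sgACC"]))  # -1.0: net inhibitory
print(indirect_sign(["DLPFC", "INS", "HIP"]))    # -1.0: net inhibitory
```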
For the PCECM, we found only a significant excitatory influence from the DLPFC to the CAU, which in turn exerted inhibitory influences on the DLPFC and mPFC, suppressing the responses of these two brain regions (Fig. 4c, d ).
After the 5-day treatment, the self-connection of the INS and the connection from the HIP to the INS significantly increased, while the connection from the HIP to the sgACC decreased (Fig. 4e, f ). More importantly, after rTMS treatment the reduction in BSI-CV scores was negatively correlated with the strength of the connection from the HIP to the INS ( p = 0.001) (Fig. 4g ). The MADRS reduction also correlated negatively with the effective connectivity from the HIP to the INS ( p = 0.021) following the 5 days of treatment (Fig. 4h ).
Responders and non-responders to suicidal ideation
The distributions of the individualized stimulation targets of the responders and non-responders to suicidal ideation are displayed in Fig. 5a .
In the NCECM, the responders to suicidal ideation showed significantly increased connectivity from the HIP to the DLPFC, whereas the connectivity of the PCUN, INS, and HIP, as well as the connection from the HIP to the sgACC, decreased (Fig. 6a ). Meanwhile, compared with non-responders, suicidal-ideation responders showed differences in the CAU–DLPFC connection and in the CAU self-connection (Fig. 6b ).
Responders and non-responders to depression
The distributions of the individualized stimulation targets of the responders and non-responders to depression are displayed in Fig. 5b .
In contrast to the suicidal ideation pattern, after rTMS treatment the depression responders showed increased connections in the NCECM from the HIP to the sgACC and INS, as well as an increased self-connection of the DLPFC, together with decreased connectivity from the sgACC to the HIP and a decreased self-connection of the sgACC (Fig. 6c ). In the PCECM, depression responders showed a decreased self-connection of the CAU following rTMS treatment (Fig. 6d ).
For depression responders, the baseline (pre-treatment) self-connection of the sgACC was negatively correlated with the MADRS reduction ( p = 0.033) (Fig. 6e ). The effective connectivity from the HIP to the sgACC was also negatively correlated with MADRS scores after rTMS treatment ( p = 0.040) (Fig. 6f ).
In the current study, we examined the feasibility and clinical efficacy of SAINT in relieving suicidal ideation in patients with MDD. Our results showed that SAINT rapidly reduced the severity of suicidal ideation, with a high response rate of 65.63% after only 5 days of treatment. Moreover, stimulation of the DLPFC targets induced changes in a brain network of regions that had negative functional connectivity (i.e., correlations) with the target region. In addition, by comparing responders and non-responders, we found that distinct changes in connectivity may contribute to the rapid effects of SAINT on the relief of suicidal ideation and on the amelioration of depression severity, respectively. These findings suggest that SAINT has great promise for the treatment of suicidal ideation associated with depression. More importantly, the current study extends our understanding of the neurobiological underpinnings of SAINT, which could further facilitate the optimization of its clinical efficacy.
The current study demonstrated that SAINT is a safe and feasible approach that can rapidly and effectively alleviate suicidal ideation in patients with MDD. High suicide risk in MDD is a serious public health issue, yet an effective treatment strategy that can rapidly and safely relieve suicidality in these patients remains elusive. Currently available treatment options such as antidepressants, lithium, and psychotherapy have failed to show rapid and effective prevention effects, and some may even increase suicidal thoughts during early-phase pharmacotherapy [ 14 ]. There is growing interest in the use of rTMS to reduce suicidal ideation. However, studies have shown inconsistent benefits of rTMS on suicidal ideation [ 53 ]. Earlier sham-controlled rTMS studies reported a reduction in suicidal ideation, but the improvements were independent of active or sham stimulation [ 54 – 56 ]. After the stimulation protocol was optimized and an MRI-guided precision-targeting strategy was employed, the suicide prevention effects of rTMS appear to have been enhanced. Pan and colleagues [ 57 ] reported that MRI-navigated high-dose rTMS treatment significantly reduced suicidal ideation relative to sham stimulation.
Recent studies have also suggested that SAINT is an effective way to relieve depression, but its suicide prevention effects remained unaddressed [ 23 ]. In this study, we found that SAINT could effectively alleviate suicidal ideation in MDD patients, with a high response rate of 65.63%. Moreover, the response rate reached 78.13% and 90.63% at 2 weeks and 4 weeks after SAINT, respectively. These findings could promote the development of safe and rapid suicide prevention strategies and reduce suicide risk in patients with MDD.
The current study identified the neural pathways that might support the rapid suicide prevention and antidepressant effects of SAINT. We studied the signal propagation pathways from the rTMS targets to other rTMS-responsive regions using effective connectivity analysis, which describes directed information flow within a brain network. It has been suggested that the propagation of effects from the stimulation target (DLPFC) of rTMS may be an accurate biomarker of its clinical efficacy [ 58 ]. In this study, for the first time, we identified these pathways, using effective connectivity analysis, from the DLPFC target to core brain systems implicated in depression. Specifically, stimulation of the DLPFC might first inhibit the activity of the PCUN and INS, from which influences are then relayed to the sgACC, resulting in suppression of the enhanced limbic activation seen in depressed patients. These findings provide crucial support for the hypothesis that rTMS may exert its antidepressant effects through remote normalization of hyperactivity in the sgACC and other limbic regions [ 35 ].
It is worth noting that, instead of a direct inhibitory connection from the target to the sgACC, the results suggest that the stimulation effects might first propagate to the INS, which then relays them to the sgACC and other core brain regions implicated in MDD. Intriguingly, although the basic idea behind SAINT is to improve treatment efficacy by targeting the region that is most negatively functionally connected with the sgACC [ 22 , 23 ], we did not find any significant correlations between DLPFC–sgACC connectivity and depression score reductions in the current study. According to recent studies, the distance between the actual target and the optimal DLPFC target is anticorrelated with the strength of sgACC-based functional connectivity [ 29 , 59 ]. Here, the stimulation target had already been selected as the spot with the highest anticorrelation with the sgACC, so the distance between the actual and optimal stimulation targets should be 0 mm, which may be one reason why no significant correlation was observed between DLPFC–sgACC functional connectivity and depression score changes.
Indeed, it is the INS that acts as a hub node in the network. This is in line with the functional anatomy of the insula and sgACC. Anatomically, previous studies have reported the absence of direct anatomical connections between BA46 (DLPFC) and BA25 (sgACC) [ 60 ]. In contrast, tracer studies and in vivo fiber-tracking studies have consistently identified structural connectivity of the INS with frontal, temporal, and limbic regions in macaque monkey and human brains [ 61 – 64 ]. Functionally, the INS has been considered a crucial functional hub [ 65 ]. It is involved in a wide range of functions, including emotion regulation, salience detection, and attentional control [ 65 ]. More importantly, the INS is thought to initiate the switching between large-scale task-negative and task-positive networks [ 66 – 68 ].
Our findings also suggest that different neural mechanisms may contribute to the rapid-acting effects of SAINT on the relief of suicidal ideation and the amelioration of depression severity, respectively. By comparing the effective connectivity of responders and non-responders, we found that relief of suicidal ideation was specifically associated with the effective connectivity of the INS and HIP, while mitigation of depression severity was related to the connectivity of the sgACC. Consistent evidence has related the antidepressant effects of rTMS to connectivity of the sgACC [ 27 , 28 , 69 – 71 ]. An earlier study found that better treatment outcomes were associated with more negative functional connectivity between the target and the sgACC [ 27 ]. This finding was further replicated by other groups [ 28 , 69 , 71 ]. This region has thus been suggested as a possible neurobiological marker for assessing the clinical efficacy of antidepressant treatments [ 71 ]. Baseline sgACC metabolic activity and connectivity were found to be predictive of antidepressant response [ 28 , 69 , 71 ], which is also replicated in our results.
Resting-state DLPFC–sgACC functional connectivity profiles have also reliably differentiated responders and non-responders [ 35 ]. Furthermore, on visual inspection, the depression circuit maps of the responders in our current study were to some extent similar to the convergent network proposed by Siddiqi et al. [ 48 ] (Supplementary Figure 1 ). On the other hand, the differences between the connectivity maps of our study and that convergent network may be attributed to the fact that our sample comprised MDD patients with suicidal ideation, which agrees with our finding that different neural mechanisms may contribute to the rapid-acting effects on suicidal ideation and on depression severity.
Regarding the neural pathways contributing to the suicide prevention effects of SAINT, the current study extended previous work by showing that the effective connectivity of the INS and HIP predicts the rapid-acting effects of SAINT on suicidal ideation. The INS is one of the core regions of the brain's salience network, which is crucial for cognitive control [ 36 ]. Among individuals with borderline personality disorder, a disorder defined partially by recurrent suicidal behavior, suicide attempters demonstrated decreased grey matter concentrations in the INS compared with healthy controls and non-attempters [ 72 ]. Reduced cortical thickness in the INS has also been reported in depressed patients with suicidal ideation [ 73 ]. A recent MEG study reported reduced gamma power, reflecting an excitation-inhibition imbalance, in the INS of MDD patients [ 74 ]. In the current study, we showed that the self-connection of the INS, which reflects its excitability, was reduced by SAINT. Our findings suggest that SAINT may rapidly alleviate suicidal ideation by modulating the excitability of the INS. More importantly, it may also be possible to optimize the clinical efficacy of SAINT for suicide prevention by selecting a stimulation target that demonstrates the most negative functional/effective connectivity with the INS.
We need to consider some limitations when interpreting our results. This study aimed to explore the feasibility of the Stanford Accelerated Intelligent Neuromodulation Therapy (SAINT) in rapidly relieving suicidal ideation and, following the original SAINT study [ 23 ], used an open-label design without a sham group; we therefore could not rule out possible confounding effects of the concomitant drugs. A double-blind, randomized, sham-controlled trial is required to better interpret the therapy's underlying mechanisms and to confirm its benefits in alleviating suicidal ideation and depression. In addition, a real-time target-tracking and -following robot system was used to ensure that the DLPFC subregion most negatively functionally connected with the sgACC received the stimulation. Thus, we were unable to collect fMRI data while the patients were receiving rTMS, owing to the difficulty of placing the robot system in an MRI scanner. Future studies may need to replicate the findings with concurrent TMS-fMRI or TMS-EEG. Another limitation is that corrections for multiple comparisons were not performed on the correlations between connectivity and clinical scores, because of the small sample size. As this study is the first to explore the underlying mechanism of SAINT, we should be cautious in interpreting these results, and studies with larger sample sizes are needed to further explore the neural mechanisms of SAINT. Moreover, a limitation of the effective connectivity analysis should be noted: DCM is built on Bayesian model comparison or reduction, which depends on the data themselves, and this procedure simplifies the model at the cost of sacrificing data complexity [ 75 ]. In addition, for safety reasons, rTMS treatment was combined with antidepressants (i.e., venlafaxine/duloxetine) for all patients, as in previous studies [ 22 , 23 , 37 ].
Another limitation is that the recruited patients were not persons with treatment-refractory depression, so the results cannot be generalized to this population. | High suicide risk represents a serious problem in patients with major depressive disorder (MDD), yet treatment options that could safely and rapidly ameliorate suicidal ideation remain elusive. Here, we tested the feasibility and preliminary efficacy of the Stanford Accelerated Intelligent Neuromodulation Therapy (SAINT) in reducing suicidal ideation in patients with MDD. Thirty-two MDD patients with moderate to severe suicidal ideation participated in the current study. Suicidal ideation and depression symptoms were assessed before and after 5 days of open-label SAINT. The neural pathways supporting the rapid-acting antidepressant and suicide prevention effects were identified with dynamic causal modelling based on resting-state functional magnetic resonance imaging. We found that 5 days of SAINT effectively alleviated suicidal ideation in patients with MDD, with a high response rate of 65.63%. Moreover, the response rates reached 78.13% and 90.63% at 2 weeks and 4 weeks after SAINT, respectively. In addition, we found that the suicide prevention effects of SAINT were associated with effective connectivity involving the insula and hippocampus, while the antidepressant effects were related to connections of the subgenual anterior cingulate cortex (sgACC). These results show that SAINT is a rapid-acting and effective way to reduce suicidal ideation. Our findings further suggest that distinct neural mechanisms may contribute to the rapid-acting effects on the relief of suicidal ideation and depression, respectively.
| Supplementary information
The online version contains supplementary material available at 10.1038/s41398-023-02707-9.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (61976248, 81974215), Clinical Research Project of Fourth Military Medical University (2021XB023), and Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions (NYKFKT2020001).
Author contributions
All authors contributed extensively to the work presented in this paper. BL and NZ conceptualized the work, analyzed the imaging data, and wrote the main paper; NT conceived the study with HW and L-BC and administered the experiment; KJF gave technical support and conceptual advice; WZ, DW, JL, YC, MY, YQ, WL, WS, ML, PZ, LG, SQ administered the experiment and collected the data; L-BC, HW supervised its analysis and edited the manuscript.
Data availability
Data supporting the findings of this study are available from the corresponding author.
Competing interests
The authors declare no competing interests. | CC BY | no | 2024-01-16 23:34:59 | Transl Psychiatry. 2024 Jan 10; 14:21 | oa_package/c6/b6/PMC10781692.tar.gz |
|
PMC10782803 | 38205965 | INTRODUCTION
Accurate pathological diagnosis is crucial for managing cancer patients, and many efforts have been made toward it. With the aid of artificial intelligence (AI) techniques, various diagnostic classifiers have emerged, such as those trained from hematoxylin and eosin images [ 1–3 ], stimulated Raman histology data [ 4 ], etc. Among them, the DNA methylation (DNAm) classifiers show outstanding performance [ 5–8 ].
When combining Illumina Infinium Methylation array data with AI methods, several clinical-grade DNAm classifiers have been created [ 5 , 8 ], which are particularly suitable for individualized cancer diagnostics [ 9–12 ]. However, a comprehensive R package dedicated to this task has been lacking. Therefore, we developed the R package methylClass to fill this gap.
Within it, various machine learning methods are covered, such as random forest (RF), support vector machine (SVM) and extreme gradient boosting (XGBoost). Among them, RF is fast and performs well, making it the most popular for DNAm classification.
On the other hand, in terms of accuracy, SVM can outperform the other methods. However, its time complexity prohibits its application to large datasets, making SVM less popular. Hence, in this package, we develop a modified SVM method, eSVM (ensemble-based SVM), which achieves similar accuracy in much less time, promoting the use of SVM-like classifiers.
Furthermore, we also include a multilayer perceptron neural network. However, it also suffers from long running times on large datasets, so we modified it into an eNeural (ensemble-based neural network) method, analogous to eSVM.
In addition, the package provides some new feature selection and multi-omics integration methods. For example, the Single-Cell Manifold Preserving Feature Selection ( SCMER ) method can screen for markers preserving the manifold of original data [ 13 ], the JVis method can perform joint tSNE and uniform manifold approximation and projection (UMAP) embedding [ 14 ], and the multi-omics classification method Multi-Omics Graph cOnvolutional NETworks ( MOGONET ) can be used if other omics are available in addition to DNAm [ 15 ]. Similarly, eSVM and eNeural can also train multi-omics classifiers because of their internal ensemble framework. This is another advantage because they expand the application from SVM’s single-omic data to multi-omics data. These functions are tested with four cancer datasets here, including three methylation datasets and one multi-omics dataset. | METHODS AND RESULTS
Package overview
The package has three modules ( Figure 1 ). The first is a machine learning module. It constructs classifiers from DNAm data, including the methods of RF, SVM, XGBoost, ELNET (elastic net classification), eSVM, eNeural and MOGONET . The second is a multi-omics module containing eSVM, eNeural and MOGONET , which can train classifiers not only from DNAm data but also from multi-omics data, including RNA, miRNA, copy number variation data, etc. The third module includes various auxiliary functions, such as feature selection, pan-cancer sample filtering, etc.
eSVM uses less running time but achieves an accuracy as high as SVM
We first tested three machine learning methods (RF, SVM and eSVM) on a central nervous system (CNS) tumor dataset with 2801 samples and 91 classes (DKFZ CNS data, originally generated by the German Cancer Research Center, Deutsches Krebsforschungszentrum, DKFZ) [ 5 ]. We performed a probe series experiment by selecting the top 1000, 5000, 10 000, up to 30 000 most variable methylation probes from the data and then used them to construct different machine learning models. This was completed by calling the function maincv in the package. Then, model performance was evaluated via a 5 by 5 nested cross-validation (CV), also performed by maincv ( Figure 2A ). Finally, these model results were calibrated via ridge regression provided by the function maincalibration .
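The 5 by 5 nested CV scheme is not specific to the package; as a rough, language-agnostic sketch (in Python rather than the package's R), the fold bookkeeping can be written as follows, where the inner folds only ever partition the outer-training samples, so each outer test fold stays untouched until final evaluation:

```python
import random

def kfold_indices(n, k, seed=0):
    """Split sample indices 0..n-1 into k shuffled, near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nested_cv_plan(n, outer_k=5, inner_k=5, seed=0):
    """Yield (outer_test, inner_train, inner_test) index lists. The inner
    loop sees only the outer-training data, never the outer test fold."""
    for i, outer_test in enumerate(kfold_indices(n, outer_k, seed)):
        held = set(outer_test)
        outer_train = [j for j in range(n) if j not in held]
        for inner in kfold_indices(len(outer_train), inner_k, seed + i + 1):
            inner_test = [outer_train[j] for j in inner]
            inner_train = [j for j in outer_train if j not in set(inner_test)]
            yield outer_test, inner_train, inner_test
```

For 5 outer and 5 inner folds this yields 25 train/test splits, matching the 5 by 5 design used throughout the study.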
As described in Supplementary Data , eSVM was an ensemble model combining the traditional SVM with the bagging framework, and it utilized the feature sampling step of bagging to relieve the long running time of SVM.
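The core idea, training each base learner on a random subset of the features and aggregating by majority vote, can be sketched as below. This is a conceptual illustration in Python, not the package's R code, and a nearest-centroid learner stands in for the SVM base learner, which would be impractical to reproduce here:

```python
import random
from collections import Counter

def nearest_centroid_fit(X, y):
    """Stand-in base learner: one centroid per class on the given features."""
    cent = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cent[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return cent

def nearest_centroid_predict(cent, x):
    return min(cent, key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, cent[lab])))

def ensemble_fit(X, y, n_learners=5, subset_frac=0.5, seed=0):
    """Train each base learner on a random feature subset (bagging over
    features), which keeps every sub-problem low-dimensional."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    k = max(1, int(subset_frac * n_feat))
    learners = []
    for _ in range(n_learners):
        idx = rng.sample(range(n_feat), k)
        sub = [[row[j] for j in idx] for row in X]
        learners.append((idx, nearest_centroid_fit(sub, y)))
    return learners

def ensemble_predict(learners, x):
    """Aggregate the base learners by majority vote."""
    votes = [nearest_centroid_predict(cent, [x[j] for j in idx])
             for idx, cent in learners]
    return Counter(votes).most_common(1)[0][0]
```

Because each base learner sees only a fraction of the probes, its training cost stays nearly flat as the total probe number grows, which is consistent with the running-time behaviour reported below.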
Given that the DNAm probes in the training data formed a very high-dimensional feature space in which the samples were distributed sparsely, a linear kernel was used for both SVM and eSVM because it could best separate sparse samples [ 16 ].
The result showed that both SVM and eSVM had a misclassification error lower than that of RF. Before model calibration, RF had a misclassification error > 0.05 for almost all probe numbers, whereas eSVM had an average value of 0.0466 and SVM 0.0317. After calibration, these two methods improved to around 0.02 (SVM average = 0.0219 and eSVM average = 0.0225), but RF only reached an average of 0.0494. In addition, the ridge calibration also reduced the average mLogloss of SVM and eSVM to 0.0847 and 0.0807, whereas for RF it was > 0.15 for most probe numbers. Hence, the advantage of support-vector-based methods was demonstrated.
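The two evaluation metrics quoted throughout, misclassification error and multiclass log loss (mLogloss), are standard; for reference, a minimal Python rendering (not taken from the package) is:

```python
import math

def misclassification_error(y_true, y_pred):
    """Fraction of samples whose predicted class differs from the truth."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def mlogloss(y_true, probs, classes, eps=1e-15):
    """Mean negative log probability assigned to the true class; `probs`
    holds one per-sample probability vector ordered like `classes`."""
    total = 0.0
    for label, p in zip(y_true, probs):
        q = min(max(p[classes.index(label)], eps), 1.0 - eps)
        total -= math.log(q)
    return total / len(y_true)
```

Note that mLogloss rewards well-calibrated probabilities, not just correct argmax predictions, which is why the ridge calibration step improves it even when the error rate barely moves.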
Although the accuracy of SVM and eSVM was similar, eSVM had a large advantage in running time. Among the three methods, RF was the most time-efficient. For a 5 by 5 nested CV running on 10 threads with 1000 probes, raw RF training took only 0.462 h, and as the probe number increased, the running time increased slightly, up to 2.33 h for 30 000 probes. SVM performed much worse here: for 1000 probes, it took 1.67 h to finish the nested CV, and when the probe number increased to 5000, its running time rose sharply to 6.98 h, already larger than the RF time on 30 000 probes. This illustrates the barrier to SVM being widely used. In contrast, the running time of eSVM was much shorter. Although it needed 4.33 h for the initial 1000 probes, after that it increased very slowly, taking 4.46 h for 5000 probes, 4.52 h for 10 000 probes, and up to 11.5 h for 30 000 probes, much less than the SVM values of 6.98, 14.1 and 47.6 h.
The calibration step itself was time-efficient. Ridge took an average of 0.464 h for the RF models, 0.31 h for the eSVM models and 0.332 h for the SVM models. Hence, the total time of raw model training plus calibration was similar to that of raw training alone.
Next, we explored why eSVM had accuracy similar to SVM by comparing their support vectors (SVs). If their SVs overlapped largely, the margins they defined to separate the classes should be very close, leading to similar accuracy. With the function maintrain in the package, we checked the models trained on all 2801 DKFZ samples; for eSVM, we combined the SVs of its base learners. The comparison showed that they covered all 1944 SVs of the SVM model ( Figure 2B ). Hence, the margins of SVM and eSVM were similar.
Moreover, eSVM also had 587 unique SVs, so of the total 2801 CNS samples in the dataset, 1944 were SVs shared by the two models, 587 were eSVM-specific SVs, and the remaining 270 were non-SVs for both models. Then, we checked the principal component analysis (PCA) embedding on the top 10 000 variable probes and found that the 2801 samples formed two clusters; most of the 1944 shared SVs and the 587 eSVM-specific SVs were in the large cluster, whereas most of the 270 non-SVs were in the small cluster ( Figure 2C ). Looking at the cancer subclasses, most of them mixed in the large cluster ( Figure 2D ). This demonstrated that the samples in the large cluster were difficult to separate, so they readily incurred a penalty and became SVs. In contrast, the non-SVs were in the small cluster and were isolated from most samples in the large cluster, so they could be separated easily without penalty and remained non-SVs.
We then checked what tumor subclasses were enriched in the SVs and non-SVs. For the 1944 SVs, no significantly enriched subclasses could be found. However, five subclasses were enriched in the non-SVs, including MB (G4), MB (SHH CHL AD), ETMR, MB (WNT) and MB (G3), and all of them belonged to the embryonal CNS tumor class ( Figure 2E ). Meanwhile, the eSVM-specific SVs were enriched in three subclasses of O IDH, EPN (PF A) and MNG. For their locations in the PCA embedding, four of the five non-SV subclasses were in the small PCA cluster, whereas all the three eSVM-specific SV subclasses were in the large cluster ( Figure 2F and G ).
The top variable, limma and SCMER methods select largely different features
In addition to the model algorithms, the features used for classifier training were also important. To evaluate them on the DKFZ dataset, we used the function mainfeature in the package to select three kinds of features (DNAm probes here): the top 10 k most variable (top10k) features, limma features and SCMER features. Then, the functions maincv and maincalibration were used on them to train classifiers in the 5 by 5 CV framework. The result showed that the best five models were all SVM models combined with different features and calibration methods, whereas the 6th–10th best models were all eSVM models ( Figure 3A ). The eNeural and RF models showed weaker performance, but eNeural was better than RF. Among the three calibration methods, multinomial ridge regression (MR) was better than logistic regression (LR) and Firth's logistic regression (FLR), because most top models used MR for calibration. For the feature selection methods, the top models covered all of them, and none showed a significant advantage.
Among these features, SCMER features preserved the sample embedding, and they were largely different from the top10k and limma features because they needed to preserve the sample–sample similarity matrix first, making SCMER a graph-based method sensitive to sample changes. This was shown by the five outer training sets of the 5 by 5 nested CV. Although the sets had several different samples, they always had similar top10k or limma features, but not SCMER ones ( Figure S1 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). For the top10k features, each training set had 9560 same as others, whereas, for the limma features, 9164 out of 10 000 features were shared. However, for SCMER , only 2427 out of around 10 000 features were shared because the sample changes in each training set made the sample–sample similarity matrix change largely, and SCMER was sensitive to it.
We then compared the 9560, 9164 and 2427 common features of top10k, limma and SCMER . We found that they further shared only 279 features ( Figure 3B ), which were positively enriched in the OpenSea probe island region ( Figure S2A , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ) and negatively enriched in the TSS1500 gene region ( Figure S2B , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ).
In addition, we explored the functions of these probes by mapping the TSS200, TSS1500 and 1stExon probes to genes and checked their functions. The three feature groups shared 68 genes ( Figure 3C ). Functional enrichment on these shared genes and the unique ones of each feature group showed a close relationship to neural tumors ( Figure S3 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). More details could be found in Supplementary Data.
The SVM/eSVM classifiers identify new subclasses in TCGA data
So far, SVM and eSVM were the best machine learning models, and MR was the best calibration method. Hence, we trained the six SVM/eSVM-MR classifiers from the whole DKFZ dataset and applied them to another DKFZ validation set; these were the SVM-MR-top10k, SVM-MR- limma , SVM-MR- SCMER , eSVM-MR-top10k, eSVM-MR- limma and eSVM-MR- SCMER classifiers. Their prediction results were further aggregated to get a unified one, as described in Supplementary Data. Because the labels of the validation dataset were not true labels but predictions from the DKFZ RF classifier [ 5 ], to reduce the influence of noise from these DKFZ predictions, we stratified the samples according to their prediction confidence from the DKFZ classifier; a group with higher confidence would have less noise in its labels. Correspondingly, the performance of our classifiers improved as the noise decreased ( Figure 4A ). For the samples with confidence > 0.9, assuming that all their labels were correct, the aggregated SVM/eSVM classifier obtained a misclassification error of 0.0106, a Brier score of 0.0263 and an mLogloss of 0.0733. As the label confidence decreased, the classifier performance weakened.
Next, we compared all the predicted labels from our SVM/eSVM model with those from the DKFZ RF classifier. We found that 118 samples were predicted divergently ( Figure 4B ). Because no true labels were available, we could not compare their accuracy directly. However, some clustering internal validation indices could be used to measure the aggregation of the clusters (actually classes here) and determine the better classification. We used three internal validation indices: the Silhouette, Calinski–Harabasz and Davies–Bouldin indices. The former two are optimal at their maximum values, whereas the third is optimal at its minimum. We found that our SVM/eSVM classification had a larger Silhouette (0.126 versus 0.121) and Calinski–Harabasz index (78.281 versus 78.137) but a smaller Davies–Bouldin index (2.102 versus 2.115), demonstrating that our model was better: samples in the same class tended to be closer to each other, whereas those in different classes tended to be farther apart ( Figure 4C ).
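As a concrete reference for the first of these indices, the silhouette computation follows directly from its definition; the sketch below (plain Python, not the package's implementation) averages, for each sample, the gap between its mean within-class distance a and its smallest mean distance b to another class:

```python
def silhouette_index(X, labels):
    """Mean silhouette over samples: (b - a) / max(a, b), where a is the
    mean distance to one's own class and b the smallest mean distance to
    another class. Higher values mean tighter, better-separated classes."""
    def dist(p, q):
        return sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5
    scores = []
    for i, (x, lab) in enumerate(zip(X, labels)):
        by_class = {}
        for j, (y, lab2) in enumerate(zip(X, labels)):
            if j != i:
                by_class.setdefault(lab2, []).append(dist(x, y))
        if lab not in by_class:  # singleton class: silhouette defined as 0
            scores.append(0.0)
            continue
        a = sum(by_class[lab]) / len(by_class[lab])
        b = min(sum(d) / len(d) for c, d in by_class.items() if c != lab)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

A labelling that matches the geometric clusters scores near 1, whereas a labelling that cuts across them scores near or below 0, which is the sense in which these indices can rank competing classifications without ground-truth labels.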
Because the current SVM/eSVM predictions were aggregated from the six SVM/eSVM classifiers, we also checked these indices for the six models separately. Compared with DKFZ RF, all of their predictions had at least one better index. For the eSVM-MR- limma , SVM-MR-top10k and SVM-MR- SCMER models, all three indices were better (eSVM-MR- limma : Silhouette = 0.126 > 0.121, Calinski = 78.180 > 78.137, Davies = 2.093 < 2.115; SVM-MR-top10k: Silhouette = 0.126, Calinski = 78.283, Davies = 2.100; SVM-MR- SCMER : Silhouette = 0.124, Calinski = 78.175, Davies = 2.113). For SVM-MR- limma , two of its three indices were better than DKFZ RF (Silhouette = 0.124, Davies = 2.111). For eSVM-MR-top10k and eSVM-MR- SCMER , one index was better (eSVM-MR-top10k: Davies = 2.074; eSVM-MR- SCMER : Davies = 2.091). Hence, the performance of these single classifiers also indicated that the SVM/eSVM models were more accurate than DKFZ RF.
After this validation, we extended our classifiers to two The Cancer Genome Atlas (TCGA) CNS cancer datasets, low-grade glioma (LGG) and glioblastoma (GBM), to determine their methylation subclasses, because TCGA classified the samples using the histological system and their labels in the methylation system were unknown. For the 515 LGG samples, the 6 SVM/eSVM-MR classifiers aggregately predicted 6 methylation subclasses, including A IDH, O IDH, GBM (MES), A IDH (HG), GBM (RTK II) and others. The "others" group combined the samples belonging to subclasses with ≤ 10 samples ( Figure 4D ). Then, the survival data of the predicted subclasses were compared, and a significant difference was detected ( Figure 4E ). The two largest subclasses, A IDH (220 samples) and O IDH (167 samples), showed the longest survival, with median values of 28.8 and 23.2 months, whereas the two GBM subclasses, GBM (MES) (36 samples) and GBM (RTK II) (20 samples), showed the shortest survival, with medians of 12 and 17 months. Hence, although these two subclasses were diagnosed as LGG via TCGA's histological system, their short survival was more similar to that of GBM samples. Regarding patient age, these two subclasses were much older than the others ( Figure 4F ), consistent with GBM patients generally being older.
Next, to further validate our classifications of the GBM (MES) and GBM (RTK II) samples, we introduced the Sturm GBM subtyping system, which pathologists had already used as a reference to help in diagnosis [ 17 ]. The DNAm-based tSNE embedding showed that our GBM subtypes clustered together with the corresponding samples from the Sturm reference, confirming our predictions ( Figure S4 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details could be found in Supplementary Data .
For the TCGA GBM dataset, our classifiers identified three main subclasses, including GBM (RTK II), GBM (MES) and GBM (RTK I), all of which still belonged to GBM ( Figure S5A , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ), and no significant difference was detected in their survival or patient age ( Figure S5B and C , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ).
The SVM/eSVM classifiers perform better than RF in sarcoma subclassification
We also tested our package on the DKFZ sarcoma dataset with 1077 samples and 65 classes [ 8 ], and the combinations of different raw models, calibration methods and features showed different performances in a 5 by 5 nested CV ( Figure 5A ). SVM/eSVM still performed best, followed by eNeural and then RF. For the calibration methods, MR was better than the others. For the features, SCMER achieved the smallest misclassification error of 0.00279 when combined with SVM and MR.
Next, we trained the six SVM/eSVM-MR classifiers from the whole DKFZ sarcoma dataset and applied them to another DKFZ sarcoma validation set with 428 samples. Their prediction results were aggregated together to get a unified one. Because the labels of this validation dataset were also the predictions from the DKFZ RF classifier rather than the true labels [ 8 ], the previous stratification method was used to reduce the influence of the wrong predictions from the DKFZ RF classifier. For the sample group with the largest confidence score > 0.9, the aggregated labels from our classifiers obtained an error of 0.0559. As the confidence score decreased, the divergence between our classifiers and the DKFZ one increased ( Figure 5B ).
When comparing our model predictions with the DKFZ RF model labels, 72 of the 428 validation samples were predicted differently ( Figure 5C ). Then, the three internal validation indices were used again to measure the classifications. The result showed that our classification was better than DKFZ's, with both the Silhouette and Calinski–Harabasz indices larger (0.119 versus 0.115 and 25.936 versus 25.309) and the Davies–Bouldin index smaller (1.966 versus 2.023) ( Figure 5D ).
When checking the indices for the six single SVM/eSVM models separately, they all had three indices better than DKFZ RF (eSVM-MR-top10k: Silhouette = 0.122, Calinski = 25.924, Davies = 1.943; SVM-MR-top10k: Silhouette = 0.121, Calinski = 25.930, Davies = 1.950; eSVM-MR- SCMER : Silhouette = 0.117, Calinski = 25.692, Davies = 1.98; eSVM-MR- limma : Silhouette = 0.120, Calinski = 25.836, Davies = 1.981; SVM-MR- limma : Silhouette = 0.120, Calinski = 25.802, Davies = 1.981; SVM-MR- SCMER : Silhouette = 0.118, Calinski = 25.713, Davies = 1.989). It demonstrated the better performance of our SVM/eSVM models.
Next, we extended the six SVM/eSVM-MR classifiers to the TCGA SARC dataset containing 260 sarcoma samples and explored their methylation subclasses. After aggregating the six classifiers, the combined label distribution matrix showed that the samples mainly belonged to four subclasses: USARC (93 samples), LMS (60 samples), WDLS/DDLS (27 samples) and MPNST (14 samples). Other small classes with ≤ 10 samples were combined (66 samples in total) ( Figure 5E ). The survival analysis did not show a significant difference among them ( Figure 5F ). However, a significant difference was found in patient age, as the USARC patients were much older than the others ( Figure 5G ). This is consistent with reports that USARC frequently occurs in older people [ 18 ].
eSVM outperforms MOGONET in multi-omics data classification
In addition to DNAm data, eSVM could also train models from multi-omics data, which was an advantage over SVM. We tested this using 1064 TCGA breast invasive carcinoma (BRCA) samples covering three omics, i.e. 450K/27K DNAm, RNA-seq and miRNA-seq. We chose this dataset because we wanted to compare the performance of eSVM with another multi-omics classifier named MOGONET , and the TCGA BRCA dataset was used in its original study to test its performance [ 15 ].
MOGONET was a graph convolutional network (GCN) specially developed for multi-omics data and was reported to outperform other methods. It trained one GCN model for each omic, and then a fully connected neural network aggregated all the GCN results and their interaction terms. We incorporated it into our package so that it could be called directly by the function maincv or maintrain . However, we made some modifications to it, the most important being the aggregation step. Instead of aggregating both the original GCN results and their interaction terms, we discarded the interaction terms if a dataset had a class number > 2. This was because many cancer datasets had a large class number due to tumor heterogeneity, such as the DKFZ CNS and DKFZ sarcoma datasets (91 and 65 classes), and their interaction term number could be > 10 000, imposing a large time cost on the aggregation step. On the other hand, to compensate for the performance loss caused by this interaction removal, we increased the base learner number so that each omic could be assigned > 1 GCN base learner, as described in Supplementary Data .
In the original MOGONET study with interaction terms, all its testing datasets had a small class number, such as the BRCA dataset here. It contained five PAM50 subtypes (Luminal A, Luminal B, basal-like, HER2 -enriched and normal-like), so the interaction number was only 125. Hence, in addition to our modified MOGONET model, we could also use the original MOGONET with interaction terms. Moreover, we completely followed the original MOGONET study to select the features for classifier training, mainly based on ANOVA.
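The interaction terms discussed above are products across the per-omic class-probability vectors, so their count grows as C^M for C classes and M omics: 5 classes over 3 omics give 125 terms, whereas 91 classes give 91^3 ≈ 754 000, which is why the modified model drops them. A simplified sketch of this construction (our reading of the aggregation input, not MOGONET's actual code) is:

```python
from itertools import product
from math import prod

def interaction_terms(per_omic_probs):
    """One feature per combination of class indices across omics: the
    product of the chosen class probability from each omic's vector.
    With M omics of C classes each, this yields C**M terms."""
    ranges = [range(len(v)) for v in per_omic_probs]
    return [prod(vec[i] for vec, i in zip(per_omic_probs, combo))
            for combo in product(*ranges)]
```

This makes the trade-off explicit: the cross-omic products carry information the per-omic outputs alone do not, but their number explodes with the class count.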
Next, we checked the data embedding using the function mainjvisR in the package, which followed the JVis joint embedding method [ 14 ]. It generated the tSNE plots, not only for the three omics individually but also for the joint embedding integrating the sample-sample adjacency of all the omics ( Figure 6A ). The embedding demonstrated the difficulty of this classification because the five BRCA classes were always mixed.
Correspondingly, all the classifiers had a misclassification error > 0.1 in the 5 by 5 nested CV ( Figure 6B ), but the performance of eSVM was much better than that of MOGONET , even though all features were selected following the MOGONET ANOVA method. The two best models were eSVM-LR and eSVM-FLR, with error rates both of 0.107. In contrast, the error rates of all MOGONET models were ≥ 0.16, weaker than eSVM but consistent with the original study's results (error = 0.171 in a plain 5-fold CV).
Notably, the original MOGONET always showed a smaller error than the modified one, illustrating the advantage of calculating the interaction terms for aggregation. Although we introduced more base learners to the modified model, it did not completely compensate for the loss.
Moreover, we also used eSVM on the individual omic data and found that the eSVM-RNA models performed better than the eSVM-DNAm and eSVM-miRNA ones. Their best error rate was 0.111. However, this was still weaker than the eSVM multi-omics model.
Meanwhile, another study also used this TCGA BRCA dataset to test its classifier, meth-SemiCancer [ 19 ]. However, it only used the DNAm part because meth-SemiCancer was a single-omic method to predict cancer subtypes from DNAm data. Its uniqueness was that it was a semi-supervised classifier. We were interested in it and used the TCGA BRCA data to build meth-SemiCancer . Then, we compared its performance with our DNAm models. The result showed that SVM and eSVM were more accurate than meth-SemiCancer ( Figure S6 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details could be found in Supplementary Data .
SVM/eSVM classify pan-cancer samples accurately
We next tried our models on a pan-cancer dataset. We randomly selected 1712 pan-cancer samples from 34 public datasets, all from the Infinium MethylationEPIC platform and covering 37 cancer types.
Although these samples had clear histological labels, because the histological system and the methylation system were different, the histological label of a sample might not match its methylation cluster. Hence, as described in Supplementary Data , we performed a filtering process and only kept the samples with a matching relationship between their histological labels and methylation clusters.
We first used mainfeature to select the dataset’s top10k, limma and SCMER probe features. Then, we combined these features to get their union. After that, mainjvisR embedded the samples based on this union. From the tSNE result, most samples with the same histological labels tended to cluster together. However, some samples were diffused into clusters dominated by a different histological label, reflecting the divergence between the histological and methylation systems ( Figure S7A , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ).
Then, we used the functions clustergrid and labelclusters on this tSNE embedding and filtered out samples without matching between their histological labels and DNAm clusters. Finally, 1198 of the 1712 samples passed the filtering process. They covered 33 histological pan-cancer classes ( Figure S7B , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details were in Supplementary Data .
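The matching filter can be summarized by a simple rule: keep a sample only if its histological label agrees with the label dominating its methylation cluster. The sketch below is a simplified majority-vote stand-in for the clustergrid / labelclusters procedure, written in Python for illustration:

```python
from collections import Counter

def filter_matching(samples):
    """Keep samples whose histological label equals the majority
    histological label of their methylation cluster.
    `samples` is a list of (sample_id, histo_label, cluster_id) tuples."""
    clusters = {}
    for _, label, clust in samples:
        clusters.setdefault(clust, []).append(label)
    majority = {clust: Counter(labels).most_common(1)[0][0]
                for clust, labels in clusters.items()}
    return [s for s in samples if s[1] == majority[s[2]]]
```

Samples discarded by such a rule are exactly those whose histological and methylation assignments disagree, which is the divergence between the two systems described above.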
Next, the functions maincv and maincalibration were used on these samples to construct pan-cancer classifiers with their top10k, limma and SCMER features, respectively. The 5 by 5 CV results showed that the SVM/eSVM models classified the pan-cancer samples much better than the RF model. When coupled with FLR/MR calibration and limma features, SVM reached the smallest error rate (0.00668) of all the models ( Figure 7A ). Meanwhile, the best error rate of the eSVM models was 0.01, whereas that of RF was 0.0209.
Moreover, we also constructed multi-omics classifiers on this dataset with eSVM and MOGONET (modified MOGONET without interaction terms). It was fulfilled by splitting the original data probes into three groups: gene promoter probes, gene body probes and other probes. Then, each group was treated as a single omic so that multi-omics models could be built on these pseudo-multi-omics data, and the top10k, limma and SCMER features could also be selected from each pseudo-omic. The tSNE embedding from mainjvisR showed that the pan-cancer classes could be separated well across all the pseudo-omics and the feature selection methods ( Figure S8 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ), and the classification result showed that the eSVM-MR- SCMER model reached the best error of 0.0125 among all the multi-omics classifiers ( Figure 7B ). In contrast, for the MOGONET models, their best error was only 0.0518. Hence, eSVM still had a large advantage over MOGONET .
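Splitting one DNAm profile into pseudo-omics is just a partition of the probes by genomic annotation; a minimal sketch (with hypothetical probe names and region labels, in Python for illustration) is:

```python
def split_pseudo_omics(beta, annotation):
    """Partition a {probe: beta value} profile into promoter, gene-body
    and other groups using a {probe: region} annotation, so single-omic
    DNAm data can feed a multi-omics classifier as pseudo-omics."""
    groups = {"promoter": {}, "body": {}, "other": {}}
    for probe, value in beta.items():
        region = annotation.get(probe, "other")
        key = region if region in groups else "other"
        groups[key][probe] = value
    return groups
```

Each resulting group is then treated as an independent omic, with its own top10k, limma and SCMER feature selection, exactly as described above.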
However, when applying eSVM to the individual omic data, the eSVM-MR- limma model on the promoter probe data and the eSVM-MR-top10k model on the gene body probe data could reach an error of 0.0109, better than the eSVM multi-omics models. The reason might be that the diversity among these single-omic models was small, violating the diversity requirement for ensembling them into a multi-omics model, so the correlated errors of the single-omic classifiers weakened the final ensemble.
Finally, we noted another study using the traditional SVM model to classify pan-cancer data, but the DNAm beta values of this dataset were from the RRBS (reduced-representation bisulfite sequencing) platform [ 20 ]. We checked the performance of our models on the same RRBS dataset, and the result showed that, in addition to DNAm microarray, our models could also be applied to RRBS data ( Figure S9 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details could be found in Supplementary Data. | METHODS AND RESULTS
Package overview
The package has three modules ( Figure 1 ). The first is a machine learning module. It constructs classifiers from DNAm data, including the methods of RF, SVM, XGBoost, ELNET (elastic net classification), eSVM, eNeural and MOGONET . The second module is a multi-omics module. It contains eSVM, eNeural and MOGONET . They can train a classifier not only from DNAm data but also from multi-omics data, including RNA, miRNA, copy number variation data, etc. The third module includes various assistant functions, such as feature selection, pan-cancer sample filtering, etc.
eSVM uses less running time but achieves an accuracy as high as SVM
We first tested three machine learning methods (RF, SVM and eSVM) on a central neural system (CNS) tumor dataset with 2801 samples and 91 classes (DKFZ CNS data, originally generated by German Cancer Research Center, Deutsches Krebsforschungszentrum, DKFZ) [ 5 ]. We performed a probe series experiment by selecting the top 1000, 5000, 10 000, until 30 000 most variable methylation probes from the data and then used them to construct different machine learning models. This was completed by calling the function maincv in the package. Then, the model performance was evaluated via a 5 by 5 nested cross-validation (CV), which was also fulfilled by maincv ( Figure 2A ). Finally, these model results were calibrated via ridge regression provided by the function maincalibration .
As described in Supplementary Data , eSVM was an ensemble model combining the traditional SVM model and the bagging framework and utilized the feature sampling step of bagging to relieve the time-consuming problem of SVM.
Given that the DNAm probes in the training data constructed a super-high dimensional feature space, making the samples there distributed very sparsely, a linear kernel was used for both SVM and eSVM because it could best separate sparse samples [ 16 ].
The result showed that both SVM and eSVM had a misclassification error less than RF. Before model calibration, almost all the probe numbers had a misclassification error on RF > 0.05, whereas eSVM had an average value of 0.0466, and that of SVM was 0.0317. After calibration, these two methods improved to around 0.02 (SVM average = 0.0219 and eSVM average = 0.0225), but RF only had an average of 0.0494. In addition, the ridge calibration also reduced the average mLogloss of SVM and eSVM to 0.0847 and 0.0807, whereas for RF, it was > 0.15 for most probe numbers. Hence, the advantage of support-vector-based methods was proved.
Although the accuracy of SVM and eSVM was similar, eSVM had a large advantage in running time. Among the three methods, RF was the most time-efficient: for a 5 by 5 nested CV running on 10 threads with 1000 probes, the raw RF training took only 0.462 h, and as the probe number increased, the running time increased only slightly, up to 2.33 h for 30 000 probes. SVM, however, scaled poorly. For 1000 probes, it took 1.67 h to finish the nested CV, and when the probe number increased to 5000, its running time rose sharply to 6.98 h, already larger than the RF time on 30 000 probes. This demonstrated the barrier to SVM being widely used. In contrast, the running time of eSVM was much shorter. Although it needed 4.33 h for the initial 1000 probes, it then increased very slowly, taking 4.46 h for 5000 probes, 4.52 h for 10 000 probes and up to 11.5 h for 30 000 probes, much less than the corresponding SVM values of 6.98, 14.1 and 47.6 h.
The calibration step itself was time-efficient. Ridge took an average of 0.464 h for the RF models, 0.31 h for the eSVM models and 0.332 h for the SVM models. Hence, the total time of raw model training plus calibration remained similar to that of raw training alone.
Next, to explore why eSVM had accuracy similar to SVM, we compared their support vectors (SVs). If their SVs overlapped substantially, the margins they defined to separate the classes would be very close, leading to similar accuracy. With the function maintrain in the package, we checked the models trained on the whole 2801 DKFZ samples; for eSVM, we pooled the SVs of its base learners. The comparison showed that these covered all the 1944 SVs of the SVM model ( Figure 2B ). Hence, the margins of SVM and eSVM were similar.
Moreover, eSVM also had 587 unique SVs, so of the total 2801 CNS samples in the dataset, 1944 were SVs shared by the two models, 587 were eSVM-specific SVs, and the remaining 270 were non-SVs for both models. We then checked the principal component analysis (PCA) embedding on the top 10 000 variable probes and found that the 2801 samples formed two clusters; most of the 1944 shared SVs and the 587 eSVM-specific SVs were in the large cluster, whereas most of the 270 non-SVs were in the small cluster ( Figure 2C ). When looking at the cancer subclasses, most of them were mixed in the large cluster ( Figure 2D ). This demonstrated that the samples in the large cluster were difficult to separate and thus likely to incur a penalty and become SVs. In contrast, the non-SVs were in the small cluster, isolated from most samples in the large cluster, so they could be separated easily without penalty and became non-SVs.
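The three-way partition above (shared SVs, eSVM-specific SVs and non-SVs) is plain set arithmetic. The sketch below reproduces only the reported counts; the sample indices are placeholders, arranged so that, as reported, the eSVM SVs contain all SVM SVs:

```python
# Hypothetical index sets standing in for the real models' support vectors.
all_samples = set(range(2801))
svm_svs = set(range(1944))         # SVs of the single SVM model
esvm_svs = set(range(1944 + 587))  # pooled SVs of the eSVM base learners (superset)

shared = svm_svs & esvm_svs                      # SVs of both models
esvm_specific = esvm_svs - svm_svs               # SVs of eSVM only
non_svs = all_samples - (svm_svs | esvm_svs)     # non-SVs for both models
```

The three groups partition the 2801 samples exactly, matching the counts in the text.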
We then checked which tumor subclasses were enriched in the SVs and non-SVs. For the 1944 SVs, no significantly enriched subclasses could be found. However, five subclasses were enriched in the non-SVs, including MB (G4), MB (SHH CHL AD), ETMR, MB (WNT) and MB (G3), all of which belonged to the embryonal CNS tumor class ( Figure 2E ). Meanwhile, the eSVM-specific SVs were enriched in three subclasses: O IDH, EPN (PF A) and MNG. Regarding their locations in the PCA embedding, four of the five non-SV subclasses were in the small PCA cluster, whereas all three eSVM-specific SV subclasses were in the large cluster ( Figure 2F and G ).
The top variable, limma and SCMER methods select largely different features
In addition to the model algorithms, the features used for classifier training were also important. To evaluate them on the DKFZ dataset, we used the function mainfeature in the package to select three kinds of features (DNAm probes here): the top 10 k most variable (top10k) features, limma features and SCMER features. Then, the functions maincv and maincalibration were used on them to train classifiers in the 5 by 5 CV framework. The result showed that the best five models were all SVM models combined with different features and calibration methods, whereas the 6–10th best models were all eSVM models ( Figure 3A ). The eNeural and RF models showed weaker performance, but eNeural was better than RF. Among the three calibration methods, multinomial ridge regression (MR) was better than logistic regression (LR) and Firth’s logistic regression (FLR), because most top models used MR for calibration. As for the feature selection methods, the top models covered all of them, and none showed a significant advantage.
Among these features, SCMER features preserved the sample embedding, and they were largely different from the top10k and limma features because SCMER needs to preserve the sample–sample similarity matrix first, making it a graph-based method sensitive to sample changes. This was shown by the five outer training sets of the 5 by 5 nested CV. Although the sets differed in several samples, they always had similar top10k or limma features, but not SCMER ones ( Figure S1 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). For the top10k features, each training set shared 9560 features with the others, whereas for the limma features, 9164 out of 10 000 features were shared. However, for SCMER , only 2427 out of around 10 000 features were shared, because the sample changes in each training set altered the sample–sample similarity matrix substantially, to which SCMER was sensitive.
We then compared the 9560, 9164 and 2427 common features of top10k, limma and SCMER . We found that they further shared only 279 features ( Figure 3B ), which were positively enriched in the OpenSea probe island region ( Figure S2A , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ) and negatively enriched in the TSS1500 gene region ( Figure S2B , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ).
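These stability and overlap counts come from intersecting the feature lists selected on each training set, and then intersecting the per-method common sets across the three methods. A small helper with made-up probe IDs illustrates the computation (the real sets hold roughly 10 000 probes each):

```python
def common_features(feature_sets):
    """Return the features shared by every set in the list."""
    out = set(feature_sets[0])
    for s in feature_sets[1:]:
        out &= set(s)
    return out

# tiny illustration: feature lists selected on three hypothetical training sets
fold_features = [
    {"cg01", "cg02", "cg03", "cg04"},
    {"cg01", "cg02", "cg03", "cg05"},
    {"cg01", "cg02", "cg06", "cg04"},
]
shared = common_features(fold_features)  # features stable across all folds
```

The same helper, applied to the per-method common sets, yields the 279 features shared by top10k, limma and SCMER.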
In addition, we explored the functions of these probes by mapping the TSS200, TSS1500 and 1stExon probes to genes and examining the gene functions. The three feature groups shared 68 genes ( Figure 3C ). Functional enrichment on these shared genes and on the unique genes of each feature group showed a close relationship to neural tumors ( Figure S3 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). More details could be found in Supplementary Data.
The SVM/eSVM classifiers identify new subclasses in TCGA data
So far, SVM and eSVM were the best machine learning models, and MR was the best calibration method. Hence, we trained the six SVM/eSVM-MR classifiers (SVM-MR-top10k, SVM-MR- limma , SVM-MR- SCMER , eSVM-MR-top10k, eSVM-MR- limma and eSVM-MR- SCMER ) from the whole DKFZ dataset and applied them to another DKFZ validation set. Their prediction results were further aggregated to get a unified one, as described in Supplementary Data. Because the labels of the validation dataset were not the true labels but the predictions from the DKFZ RF classifier [ 5 ], to reduce the influence of the noise in these DKFZ predictions, we stratified the samples according to their prediction confidence from the DKFZ classifier; a group with higher confidence would have less noise in its labels. Correspondingly, the performance of our classifiers improved as the noise decreased ( Figure 4A ). For the samples with confidence > 0.9, assuming that all their labels were correct, the aggregated SVM/eSVM classifier obtained a misclassification error of 0.0106, a Brier score of 0.0263 and an mLogloss of 0.0733. As the label confidence decreased, the classifier performance weakened.
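One simple way to aggregate several calibrated classifiers is to average their class-probability rows per sample and take the argmax; the package's exact aggregation rule is described in its Supplementary Data, so the sketch below is a generic version of the idea:

```python
def aggregate_predictions(prob_matrices, classes):
    """Average the class-probability rows of several classifiers, then argmax."""
    n = len(prob_matrices[0])
    labels = []
    for i in range(n):
        avg = [sum(m[i][c] for m in prob_matrices) / len(prob_matrices)
               for c in range(len(classes))]
        labels.append(classes[max(range(len(classes)), key=lambda c: avg[c])])
    return labels

# two toy classifiers, two samples, two classes
m1 = [[0.6, 0.4], [0.2, 0.8]]
m2 = [[0.7, 0.3], [0.4, 0.6]]
agg = aggregate_predictions([m1, m2], ["A", "B"])
```

Averaging calibrated probabilities lets a classifier that is confidently right outvote one that is marginally wrong.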
Next, we compared all the predicted labels from our SVM/eSVM model with those from the DKFZ RF classifier and found that 118 samples were predicted divergently ( Figure 4B ). Because no true labels were available, we could not compare their accuracy directly. However, clustering internal validation indices could be used to measure the compactness of the clusters (actually classes here) and determine the better classification. We used three internal validation indices: the Silhouette, Calinski–Harabasz and Davies–Bouldin indices. The former two are optimal at their maximum values, whereas the third is optimal at its minimum. Our SVM/eSVM classification had a larger Silhouette index (0.126 versus 0.121) and Calinski–Harabasz index (78.281 versus 78.137) but a smaller Davies–Bouldin index (2.102 versus 2.115), demonstrating that our model was better: samples in the same class tended to be closer to each other, whereas samples in different classes tended to be farther apart ( Figure 4C ).
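Of the three indices, the Silhouette is the easiest to state: for each sample, compare its mean within-class distance a with the mean distance b to the nearest other class, and average (b − a)/max(a, b) over the samples. A compact implementation on toy 2-D points (for illustration only; the real computation runs on the DNAm feature space):

```python
def silhouette(X, labels):
    """Mean silhouette width: (b - a) / max(a, b) averaged over samples."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    classes = set(labels)
    scores = []
    for i, (x, li) in enumerate(zip(X, labels)):
        same = [dist(x, X[j]) for j in range(len(X)) if j != i and labels[j] == li]
        if not same:
            continue  # singleton class: silhouette undefined, skip
        a = sum(same) / len(same)
        b = min(
            sum(dist(x, X[j]) for j in range(len(X)) if labels[j] == lo)
            / labels.count(lo)
            for lo in classes if lo != li
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# two tight, well-separated classes -> silhouette close to 1
X = [(0, 0), (0, 1), (10, 10), (10, 11)]
labels = ["c1", "c1", "c2", "c2"]
s = silhouette(X, labels)
```

A labeling that splits each tight pair across classes drives the index negative, which is why these indices can rank competing classifications without true labels.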
Because the current SVM/eSVM predictions were aggregated from the six SVM/eSVM classifiers, we also checked these indices for the six models separately. When compared with DKFZ RF, each of their predictions had at least one better index. For the eSVM-MR- limma , SVM-MR-top10k and SVM-MR- SCMER models, all three indices were better (eSVM-MR- limma : Silhouette = 0.126 > 0.121, Calinski = 78.180 > 78.137, Davies = 2.093 < 2.115; SVM-MR-top10k: Silhouette = 0.126, Calinski = 78.283, Davies = 2.100; SVM-MR- SCMER : Silhouette = 0.124, Calinski = 78.175, Davies = 2.113). For SVM-MR- limma , two of its three indices were better than DKFZ RF (Silhouette = 0.124, Davies = 2.111). For eSVM-MR-top10k and eSVM-MR- SCMER , one index was better (eSVM-MR-top10k: Davies = 2.074; eSVM-MR- SCMER : Davies = 2.091). Hence, the performance of these single classifiers also confirmed that the SVM/eSVM models were more accurate than DKFZ RF.
After this validation, we extended our classifiers to two The Cancer Genome Atlas (TCGA) CNS cancer datasets, low-grade glioma (LGG) and glioblastoma (GBM), to examine their methylation subclasses, because TCGA classified the samples using the histological system, and their labels from the methylation system were unknown. For the 515 LGG samples, the six SVM/eSVM-MR classifiers aggregately predicted six methylation subclasses, including A IDH, O IDH, GBM (MES), A IDH (HG), GBM (RTK II) and others. The “others” group combined the samples belonging to subclasses with ≤ 10 samples ( Figure 4D ). Then, the survival data of the predicted subclasses were compared, and a significant difference was detected ( Figure 4E ). The two largest subclasses, A IDH (220 samples) and O IDH (167 samples), showed the longest survival, with median values of 28.8 and 23.2 months, whereas the two GBM subclasses, GBM (MES) (36 samples) and GBM (RTK II) (20 samples), showed the shortest survival, with medians of 12 and 17 months. Hence, although these two subclasses were diagnosed as LGG via TCGA’s histological system, their short survival was more similar to that of GBM samples. When checking patient ages, these two subclasses also showed much higher ages than the others ( Figure 4F ), consistent with GBM patients typically being older.
Next, to further validate our classifications of the GBM (MES) and GBM (RTK II) samples, we introduced the Sturm GBM subtyping system, which pathologists had already used as a reference to aid diagnosis [ 17 ]. The DNAm-based tSNE embedding showed that our GBM subtypes clustered together with the corresponding samples from the Sturm reference, confirming our predictions ( Figure S4 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details could be found in Supplementary Data .
For the TCGA GBM dataset, our classifiers identified three main subclasses, GBM (RTK II), GBM (MES) and GBM (RTK I), all of which still belonged to GBM ( Figure S5A , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ), and no significant difference was detected in their survival or patient age ( Figure S5B and C , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ).
The SVM/eSVM classifiers perform better than RF in sarcoma subclassification
We also tested our package on the DKFZ sarcoma dataset with 1077 samples and 65 classes [ 8 ], and the combination of different raw models, calibration methods and features showed different performances in a 5 by 5 nested CV ( Figure 5A ). SVM/eSVM still performed the best, then eNeural and finally RF. For the calibration methods, MR was better than others. For the features, SCMER achieved the smallest misclassification error of 0.00279 when combined with SVM and MR.
Next, we trained the six SVM/eSVM-MR classifiers from the whole DKFZ sarcoma dataset and applied them to another DKFZ sarcoma validation set with 428 samples. Their prediction results were aggregated to get a unified one. Because the labels of this validation dataset were also the predictions from the DKFZ RF classifier rather than the true labels [ 8 ], the previous stratification method was used to reduce the influence of wrong predictions from the DKFZ RF classifier. For the sample group with confidence scores > 0.9, the aggregated labels from our classifiers obtained an error of 0.0559. As the confidence score decreased, the divergence between our classifiers and the DKFZ one increased ( Figure 5B ).
When comparing our model predictions with the DKFZ RF model labels, 72 out of the 428 validation samples were predicted differently ( Figure 5C ). Then, the three internal validation indices were used again to measure the classifications. The result showed that our classification was better than DKFZ’s, with larger Silhouette and Calinski–Harabasz indices (0.119 versus 0.115 and 25.936 versus 25.309) and a smaller Davies–Bouldin index (1.966 versus 2.023) ( Figure 5D ).
When checking the indices for the six single SVM/eSVM models separately, all of them had all three indices better than DKFZ RF (eSVM-MR-top10k: Silhouette = 0.122, Calinski = 25.924, Davies = 1.943; SVM-MR-top10k: Silhouette = 0.121, Calinski = 25.930, Davies = 1.950; eSVM-MR- SCMER : Silhouette = 0.117, Calinski = 25.692, Davies = 1.98; eSVM-MR- limma : Silhouette = 0.120, Calinski = 25.836, Davies = 1.981; SVM-MR- limma : Silhouette = 0.120, Calinski = 25.802, Davies = 1.981; SVM-MR- SCMER : Silhouette = 0.118, Calinski = 25.713, Davies = 1.989). This further demonstrated the better performance of our SVM/eSVM models.
Next, we extended the six SVM/eSVM-MR classifiers to the TCGA SARC dataset containing 260 sarcoma samples and explored their methylation subclasses. After the aggregation of the six classifiers, the combined label distribution matrix showed that the samples mainly belonged to four subclasses: USARC (93 samples), LMS (60 samples), WDLS/DDLS (27 samples) and MPNST (14 samples). Other small classes with ≤ 10 samples were combined (a total of 66 samples) ( Figure 5E ). The survival analysis did not show a significant difference among them ( Figure 5F ). However, a significant difference was found in patient ages, because the USARC patients were much older than the others ( Figure 5G ). This was consistent with reports that USARC frequently occurs in elderly people [ 18 ].
eSVM outperforms MOGONET in multi-omics data classification
In addition to DNAm data, eSVM could also train models from multi-omics data, which was an advantage over SVM. We tested this using 1064 TCGA breast invasive carcinoma (BRCA) samples covering three omics, i.e. 450K/27K DNAm, RNA-seq and miRNA-seq. We chose this dataset because we wanted to compare the performance of eSVM with another multi-omics classifier named MOGONET , and the TCGA BRCA dataset was used in its original study to test its performance [ 15 ].
MOGONET was a graph convolutional network (GCN) method specially developed for multi-omics data and was reported to outperform other methods. It trained one GCN model for each omic, and a fully connected neural network then aggregated all the GCN results and their interaction terms. We incorporated it into our package so that it can be called directly by the function maincv or maintrain . However, we made some modifications to it, the most important of which concerned the aggregation step. Instead of aggregating both the original GCN results and their interaction terms, we would discard the interaction terms if a dataset had a class number > 2. This was because many cancer datasets have a large class number due to cancer heterogeneity, such as the DKFZ CNS and DKFZ sarcoma datasets (91 and 65 classes), and their interaction term number could be > 10 000, creating a large time cost for the aggregation step. On the other hand, to compensate for the performance impairment caused by this interaction removal, we increased the base learner number so that each omic could be assigned > 1 GCN base learner, as described in Supplementary Data .
In the original MOGONET study with interaction terms, all its testing datasets had a small class number, such as the BRCA dataset here. It contained five PAM50 subtypes (Luminal A, Luminal B, Basal-like, HER2 -enriched and Normal-like), so the interaction number was only 125. Hence, in addition to our modified MOGONET model, we could also use the original MOGONET with interaction terms. Moreover, we followed the original MOGONET study exactly to select the features for classifier training, mainly based on ANOVA.
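The interaction-term counts quoted above (125 for BRCA, > 10 000 for the DKFZ datasets) are consistent with counting every combination of per-omic class predictions, i.e. c^V terms for c classes and V omics; that counting rule is an assumption made here for illustration:

```python
def interaction_terms(n_classes, n_omics):
    """Count every combination of per-omic class predictions (assumed rule)."""
    return n_classes ** n_omics

brca_terms = interaction_terms(5, 3)   # 5 PAM50 subtypes, 3 omics -> 125
cns_terms = interaction_terms(91, 3)   # DKFZ CNS classes: far beyond 10 000
```

Exponential growth in the class number is why the modified MOGONET drops interaction terms for heterogeneous cancer datasets.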
Next, we checked the data embedding using the function mainjvisR in the package, which followed the JVis joint embedding method [ 14 ]. It generated the tSNE plots, not only for the three omics individually but also for the joint embedding integrating the sample-sample adjacency of all the omics ( Figure 6A ). The embedding demonstrated the difficulty of this classification because the five BRCA classes were always mixed.
Correspondingly, all the classifiers had a misclassification error > 0.1 in the 5 by 5 nested CV ( Figure 6B ), but the performance of eSVM was much better than that of MOGONET , even though all the features were selected following the MOGONET ANOVA method. The best two models were eSVM-LR and eSVM-FLR, whose error rates were both 0.107. In contrast, the error rates of all MOGONET models were ≥ 0.16. This MOGONET performance was weaker than eSVM but matched its original study results (error = 0.171 in a normal 5-fold CV).
Notably, the original MOGONET always showed a smaller error than the modified one, illustrating the advantage of calculating the interaction terms for aggregation. Although we introduced more base learners to the modified model, it did not completely compensate for the loss.
Moreover, we also used eSVM on the individual omic data and found that the eSVM-RNA models performed better than the eSVM-DNAm and eSVM-miRNA ones. Their best error rate was 0.111. However, this was still weaker than the eSVM multi-omics model.
Meanwhile, another study also used this TCGA BRCA dataset to test its classifier, meth-SemiCancer [ 19 ]. However, it only used the DNAm part because meth-SemiCancer is a single-omic method that predicts cancer subtypes from DNAm data. Its distinguishing feature is that it is a semi-supervised classifier. We therefore built meth-SemiCancer on the TCGA BRCA data and compared its performance with that of our DNAm models. The result showed that SVM and eSVM were more accurate than meth-SemiCancer ( Figure S6 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details could be found in Supplementary Data .
SVM/eSVM classify pan-cancer samples accurately
We next tried our models on a pan-cancer dataset. We randomly selected 1712 pan-cancer samples from 34 public datasets, all from the Infinium MethylationEPIC platform and covering 37 cancer types.
Although these samples had clear histological labels, because the histological system and the methylation system were different, the histological label of a sample might not match its methylation cluster. Hence, as described in Supplementary Data , we performed a filtering process and only kept the samples with a matching relationship between their histological labels and methylation clusters.
We first used mainfeature to select the dataset’s top10k, limma and SCMER probe features. Then, we combined these features to get their union. After that, mainjvisR embedded the samples based on this union. From the tSNE result, most samples with the same histological labels tended to cluster together. However, some samples were diffused into clusters dominated by a different histological label, reflecting the divergence between the histological and methylation systems ( Figure S7A , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ).
Then, we used the functions clustergrid and labelclusters on this tSNE embedding and filtered out samples without a match between their histological labels and DNAm clusters. Finally, 1198 of the 1712 samples passed the filtering process, covering 33 histological pan-cancer classes ( Figure S7B , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details were in Supplementary Data .
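Conceptually, the matching filter keeps only samples whose histological label agrees with the label dominating their DNAm cluster. The sketch below is a simplified stand-in for the clustergrid / labelclusters workflow, whose exact procedure is described in Supplementary Data:

```python
from collections import Counter

def filter_matching(labels, clusters):
    """Keep indices of samples whose histological label equals the dominant
    label of their DNAm cluster (simplified stand-in for the real filter)."""
    majority = {}
    for c in set(clusters):
        members = [labels[i] for i in range(len(labels)) if clusters[i] == c]
        majority[c] = Counter(members).most_common(1)[0][0]
    return [i for i in range(len(labels)) if labels[i] == majority[clusters[i]]]

# toy example: cluster 1 is dominated by LGG, cluster 2 by GBM
labels   = ["LGG", "LGG", "GBM", "GBM", "GBM", "LGG"]
clusters = [1, 1, 1, 2, 2, 2]
kept = filter_matching(labels, clusters)  # drops the two mismatched samples
```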
Next, the functions maincv and maincalibration were used on these samples to construct pan-cancer classifiers with their top10k, limma and SCMER features, respectively. The 5 by 5 CV results showed that the SVM/eSVM models classified the pan-cancer samples much better than the RF model. When coupled with FLR/MR calibration and limma features, SVM reached the smallest error rate (0.00668) of all the models ( Figure 7A ). Meanwhile, the best error rate of the eSVM models was 0.01, whereas that of RF was 0.0209.
Moreover, we also constructed multi-omics classifiers on this dataset with eSVM and MOGONET (modified MOGONET without interaction terms). This was done by splitting the original data probes into three groups: gene promoter probes, gene body probes and other probes. Each group was then treated as a single omic so that multi-omics models could be built on these pseudo-multi-omics data, and the top10k, limma and SCMER features could also be selected from each pseudo-omic. The tSNE embedding from mainjvisR showed that the pan-cancer classes could be separated well across all the pseudo-omics and feature selection methods ( Figure S8 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ), and the classification results showed that the eSVM-MR- SCMER model reached the best error of 0.0125 among all the multi-omics classifiers ( Figure 7B ). In contrast, the best error of the MOGONET models was only 0.0518. Hence, eSVM still had a large advantage over MOGONET .
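The pseudo-multi-omics construction simply partitions probes by genomic annotation. Which annotation tags count as "promoter" is not specified in the text; the tag set below (Illumina-style region labels) is an assumption for illustration:

```python
def split_pseudo_omics(probe_regions):
    """Partition probes into promoter / gene-body / other groups by annotation."""
    promoter_tags = {"TSS200", "TSS1500", "5'UTR", "1stExon"}  # assumed tag set
    groups = {"promoter": [], "body": [], "other": []}
    for probe, region in probe_regions.items():
        if region in promoter_tags:
            groups["promoter"].append(probe)
        elif region == "Body":
            groups["body"].append(probe)
        else:
            groups["other"].append(probe)
    return groups

# made-up probe annotations
ann = {"cg01": "TSS200", "cg02": "Body", "cg03": "IGR", "cg04": "TSS1500"}
groups = split_pseudo_omics(ann)
```

Each resulting group then feeds the multi-omics pipeline as if it were an independent omic.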
However, when applying eSVM to the individual omic data, the eSVM-MR- limma model on the promoter probe data and the eSVM-MR-top10k model on the gene body probe data could reach an error of 0.0109, better than the eSVM multi-omics models. The reason might be that the divergence among these single-omic models was small, violating the diversity requirement for ensembling them into a multi-omics model, so the shared errors of the single-omic classifiers weakened the final ensemble.
Finally, we noted another study using the traditional SVM model to classify pan-cancer data, but the DNAm beta values of this dataset were from the RRBS (reduced-representation bisulfite sequencing) platform [ 20 ]. We checked the performance of our models on the same RRBS dataset, and the result showed that, in addition to DNAm microarray, our models could also be applied to RRBS data ( Figure S9 , see Supplementary Data available online at https://github.com/yuabrahamliu/methylClass/blob/main/README.md ). The details could be found in Supplementary Data.

Discussion
DNAm profiling is a useful tool for tumor diagnosis [ 5–8 ], and large margin classifiers, such as SVM, have better accuracy than other methods for classifying DNAm data. However, SVM has a time-consuming problem because its sequential minimal optimization (SMO) solving algorithm slows down substantially when many SVs are in the solution. Hence, we developed eSVM to relieve this problem. It shortens the sample vector via its feature sampling step during bagging and so reduces the time complexity of the inner product calculations between sample vectors, which SMO requires. Hence, eSVM is less time-consuming than SVM.
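The speed argument is simple arithmetic: the cost of one kernel evaluation (an inner product) grows linearly with the vector length, so sampling a fraction of the features cuts each evaluation proportionally. The 20% fraction below is illustrative only, not the package's setting:

```python
def dot(u, v):
    """Inner product; its cost grows linearly with the vector length."""
    return sum(a * b for a, b in zip(u, v))

p = 10_000                 # probes seen by a plain SVM kernel evaluation
m = int(p * 0.2)           # probes per eSVM base learner (assumed 20% sampling)
# Every SMO iteration evaluates kernels, i.e. inner products; shortening the
# vectors from p to m entries cuts that per-evaluation cost by a factor of p/m.
speedup = p / m
```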
In addition, eSVM also has another advantage: the expanded application to multi-omics data, which has promising power in cancer diagnosis, because cancer prognostication is a multi-modal problem that is driven by markers in histology, clinical data and genomics [ 1–8 ]. DNAm-based classifiers only capture part of the oncogenesis multi-omics network, and combined with other omics, they can get more information and increase accuracy.
Therefore, it is understandable that multi-omics models have better accuracy than DNAm ones, and not only multi-omics but other single-omic models, such as RNA ones, may outperform DNAm models. In our BRCA case study, in addition to the multi-omics eSVM models, other RNA-based eSVM models also showed higher accuracy than the DNAm ones. Because this result is from only one dataset, we cannot conclude that the advantage of RNA is universal, but if it were, the reason might be that the RNA transcriptional changes are more dynamic and highly correlated to the protein level and so have a more direct relationship with the cancer phenotype.
However, in practical cases, getting multi-omics data may be difficult, and even for the RNA data alone, they are not easy to obtain because RNA profiling is dependent on fresh tumor tissues and RNA is unstable [ 21 ]. In contrast, DNAm profiling can be reliably performed on formalin-fixed and paraffin-embedded tissues, and DNAm microarray always has high data quality [ 22 ]. This makes DNAm classification easy to implement. Hence, improving the DNAm single-omic classifier is necessary.
Although we have shown the contribution of eSVM in this area, other directions need to be explored, such as alternative calibration methods. This study includes the LR, FLR and MR methods. They are derived from the popular multi-response linear regression concept, aiming to predict each of the multiple classes separately from the raw predictions. Although this concept has been widely used, other algorithms, such as neural networks and the supra-Bayesian procedure, may work better and are worth trying.
Based on these AI techniques, our package has shown excellent efficacy in overcoming the inter-observer variability of histopathological diagnosis, which poses a major challenge in the neural tumor case study and largely increases the misclassification rate [ 10 , 23 , 24 ]. This problem is even more severe for the sarcoma case because approximately half of the sarcoma entities lack morphologic or molecular hallmarks [ 25 ]. However, DNAm classification addresses it and improves diagnostic precision, as can be seen from our classifiers’ high accuracy. Furthermore, our SVM/eSVM classifier has also shown an ability to revise the initial histological diagnosis. It identified several GBM samples in the histological TCGA LGG dataset, and the subsequent analysis supported this result because these samples showed strong GBM characteristics. This diagnostic change has a profound impact because it results in a change in the samples’ WHO (World Health Organization) grading from LGG’s lower grade (grade II or III) to GBM’s higher one (grade IV) [ 26 ].
In addition, the DNAm classifiers can continue to be reinforced because of the rapid accumulation of DNAm data to train them. For the DNAm data type, they are not restricted to microarray data; DNAm sequencing data can also be used with our package. For the sample type, they are not limited to postoperative tumor samples; cell-free DNA (cfDNA) methylation samples are another potential application because cfDNA released by a tumor also carries its genomic and epigenetic properties and can be extracted non-invasively from body fluids [ 27 , 28 ].
In summary, we provide the package methylClass for various DNAm classification tasks.

Abstract
DNA methylation profiling is a useful tool to increase the accuracy of a cancer diagnosis. However, a comprehensive R package specifically designed for it is lacking. Hence, we developed the R package methylClass for methylation-based classification. Within it, we provide the eSVM (ensemble-based support vector machine) model, which achieves much higher accuracy in methylation data classification than the popular random forest model and overcomes the time-consuming problem of the traditional SVM. In addition, some novel feature selection methods are included in the package to improve the classification. Furthermore, because methylation data can be converted to other omics, such as copy number variation data, we also provide functions for multi-omics studies. The testing of this package on four datasets shows the accurate performance of our package, especially eSVM, which can be used in both methylation and multi-omics models and outperforms other methods in both cases. methylClass is available at: https://github.com/yuabrahamliu/methylClass .

Yu Liu, PhD, is a Research Fellow (VP) at the Laboratory of Pathology, Center for Cancer Research, National Cancer Institute.

Citation: Brief Bioinform. 2024 Jan 9; 25(1):bbad485. License: CC BY.
PMC10782991 (PMID 38222060)

Introduction
In the last two decades, economists have developed a growing interest in multidimensional measures of complex social phenomena, such as poverty and well-being ( Stiglitz et al., 2010 ), as a tool to evaluate the outcomes of public policies and the Welfare State ( Atkinson 2005 ; Pestieau & Lefebvre 2018 ). In the European context, an extensive focus by policymakers and academics has been devoted to Social Inclusion, conceptualised as an enlarged measure of poverty that accounts for the interlinks between income, education, health, and labour market conditions. As Social Inclusion features prominently within the European Union’s agenda for social integration and development, e.g. Europe 2010, 2020, and 2030 Strategies ( Atkinson et al., 2002 ; Vanhercke 2012 ), policymakers require tools to monitor its progress across country and time ( Atkinson 2005 ; Rogge & Konttinen 2018 ; Stiglitz et al., 2010 ). A few studies have proposed a specific composite measure of Social Inclusion, i.e. through the aggregation of observed performances from several sub-components (attributes) into an index. However, their results are mostly descriptive in that they do not aim to provide a normative evaluation of the observed levels of Inclusion across countries. This paper introduces a rigorous approach to the construction of a normative-based index of Social Inclusion, using European regional longitudinal data from Eurostat.
Sen & Anand (1997) highlight how there are inescapable elements of subjectivity (judgement) in any stage of creating a multidimensional index. Social Inclusion is no exception. Creating a Social Inclusion index requires facing two main empirical challenges, which call for subjective and arbitrary methodological choices ( Decancq & Lugo 2013 ; Sen & Anand 1997 ) that may affect both the results and the interpretation of the results (see, e.g. the theoretical discussions by Alkire et al., (2015) ; Bosmans et al., (2015) , Bourguignon & Chakravarty (2003) ; Decancq & Lugo (2013) ; Klugman et al., (2011) , Maggino (2017) ; Ravallion (2011) ; Stiglitz et al., (2010) ; Tsui (2002) ; and the empirical discussions by Carrino (2017) ; Cavapozzi et al., (2015) ; D’Ambrosio et al., (2011) ; Decancq et al., (2019) ; Deutsch & Silber (2005) ; Döpke et al., (2017) ; Ravallion (2012) ; Saisana et al., (2005) ; Cohen & Saisana (2014) ).
First, it requires choosing a suitable aggregation function that models the relationship between the attributes of Social Inclusion. Several studies in poverty measurement have acknowledged that a linear aggregation model embeds a major shortcoming, in that it assumes perfect substitutability among sub-components (i.e. it assumes that the ‘whole’ equals the ‘sum of its parts’), and have therefore proposed nonlinear aggregation models. However, to our knowledge, no study of Social Inclusion has yet exploited a nonlinear algorithm to characterise the degree of complementarity or substitutability for each combination of sub-components.
Second, creating a multidimensional index requires to estimate the parameters of the aggregation function (e.g. the weights). Such estimation can follow normative methods, e.g. eliciting decision-makers’ preferences, or positive methods, e.g. with data-driven approaches ( Decancq & Lugo 2013 ). Most recent studies on Social Inclusion in Europe adopt data-driven approaches, e.g. Benefit of the Doubt (BOD), Principal Component Analysis, Factor Analysis, where weights are determined by the statistical properties of the available data (see Giambona and & Vassallo (2014) ; Lefebvre et al., (2010) ; Rogge & Self (2019) and Rogge & Konttinen (2018) ). While being widely adopted, valid and relevant, data-driven methods have two main limitations ( Decancq & Lugo 2013 ). First, as its parameters lack an explicit value judgement, a data-driven index should be interpreted under a positive, also called descriptive perspective. Such an index would, for example, allow to rank countries depending on their Social Inclusion score. While this evaluation can prove highly informative, it might fail to recognise that the first- and last-ranked countries would not necessarily represent socially desirable or undesirable condition. Second, data-driven parameters have a limited economic interpretation. For example, within the BOD framework, many attributes of an index are often assigned (quasi) zero weight. While this is not an ‘unfair’ choice from a statistical perspective (e.g. Rogge & Konttinen (2018) ), this allocation is not conceived to reflect an explicit social preference. Hence, data-driven indices allow to make statements about facts, while they are not ideal to draw normative policy recommendations, i.e. statement about values ( Decancq & Lugo 2013 ). Conversely, when the parameters are elicited through normative methods (e.g. experts-elicitation), the index has a normative interpretation.
This paper introduces a novel normative and non-additive index of Social Inclusion for European regions in Belgium, Denmark, Germany, Italy and Spain between 2004 and 2017. Our conceptual and operational definition of Social Inclusion is borrowed by the work of the Atkinson commission ( Atkinson et al., 2002 ) which, since the European Council of Laeken in 2001, has been one of the cornerstones of the European initiatives to monitor countries’ progress towards reducing poverty and support the Social Policy Agenda of the EU ( UNECE 2022 ). Such framework has been widely adopted in empirical studies on this topic (e.g. Rogge & Konttinen (2018) ; Mazziotta & Pareto (2016) , Carrino (2016) ; ( Rogge & Konttinen 2018 ) Lefebvre, Coelli & Pestieau (2010) ). Our work offers three important contributions to the literature. First, we use the Choquet Integral as aggregation operator ( Grabisch 1996 ), which allows to overcome the major limitation of the standard linear model, with a more flexible approach than most alternatives employed in the literature of poverty measurement. Indeed, the Choquet Integral allows to estimate both the contribution of each attribute to the overall Social Inclusion, and the degree of complementarity/substitutability for each pair (coalition) of attributes. We argue that such an approach is well suited to measure Social Inclusion, which is characterised by attributes having potentially different pairwise interactions. Nevertheless, while the Choquet Integral has been implemented in studies on well-being, sustainability, and inequality ( Angilella et al., 2014 ; Bertin et al., 2018 ; Gajdos 2002 ; Meyer & Ponthière 2011 ; Pinar et al., 2014 ), it has not yet been applied to Social Inclusion.
Second, our index is among the first to provide a normative index of Social Inclusion, as its weights are elicited from the preferences of decision-makers and academics. To enhance the external validity of our index, we elicited the parameters of the aggregation function from the social preferences of a panel of Experts in Social Policy in Italy. Using the scenario elicitation method (e.g. Benjamin et al., (2014) ), we asked decision-makers to evaluate the level of Social Inclusion embedded in a set of fictional societies, as defined by the values of four indicators of Inclusion, selected according to the existing theory. In order to standardise the scenarios, the indicators were normalised ex ante, using a normalisation function based on the stated preferences of 100 scholars in Economics ( Carrino 2016 ).
Third, we provide novel evidence on Social Inclusion exploiting longitudinal regional data. Regions have been increasingly recognised as key development actors ( OECD 2018 ) and involved in outlining the Action Plans on Social Inclusion established by the European Council ( Rogge & Konttinen 2018 ), hence it is crucial to provide a deeper understanding of geographical discrepancies and trends in Social Inclusion within countries. Moreover, we expand upon previous work by Rogge & Self (2019) , as we use time series data to illustrate dynamics between 2004 and 2017.
The strength of our approach lies in its flexibility, joint with its normative nature. As such, we believe it is an ideal complement to the existing literature based on data-driven methods. For future research, increasing the sample size of interviewees and their geographic heterogeneity would be beneficial for statistical robustness; this however does not limit the validity of our findings given the nature of expertise of the interviewees and the clear pattern in the results.
Our results, indeed, point to large disparities in Social Inclusion in Europe between Continental countries (higher Inclusion) and Mediterranean countries (lower Inclusion), which were exacerbated after the 2008 economic crisis. This challenges previous findings from recent data-driven studies ( Rogge & Konttinen 2018 ). Finally, we show that regional variation in Social Inclusion is very pronounced in Italy, and low in Denmark and Germany.
The paper is structured as follows: Section 2 introduces the dataset; Section 3 outlines the methods; Section 4 describes the results; sensitivity and comparability analyses; while Section 5 concludes. | Methods
The Choquet integral as aggregation operator
Hereafter, we assume that Social Inclusion can be described with a bounded cardinal indicator, generated by a function W , having a vector of attributes as its arguments. The function W needs to be expressive enough to approximate its target sufficiently well, and not overly flexible, to avoid poor generalisation performance ( Hüllermeier & Schmitt 2014 ). While W has often been characterised as a linear function (e.g. Peiró-Palomino (2018) ), which is convenient for both implementation and dissemination purposes, there is a growing consensus that a linear model lacks sufficient expressiveness, as it imposes a priori perfect compensation and no interaction among attributes, through the ‘preference independence assumption’ (the total is equal to the sum of the parts). This assumption is theoretically at odds with the concept of Social Inclusion, where attributes should be characterised by positive or negative interactions: for example, a set of attributes with homogeneous performances might be socially preferred to a set with some attributes scoring very high and others very low ( Angilella et al., 2014 ; Klugman, Rodríguez & Choi 2011 ; Krishnakumar 2018 ; Meyer & Ponthière 2011 ). Thus, scholars have proposed non-compensative monetary (e.g. Daly et al., (1994) ; Decancq & Schokkaert (2016) ) and non-monetary aggregation functions, e.g. through the CES framework ( Bourguignon & Chakravarty 2003 ; Krishnakumar 2018 ) in its geometric ( Klugman et al., 2011 ; Rogge & Konttinen 2018 ) and minimum-operator forms ( Krishnakumar 2018 ), and further flexible alternatives ( Mazziotta & Pareto 2016 ), or through counting approaches ( Alkire & Foster 2011 ). However, these frameworks allow one neither to identify an interaction coefficient for any n-tuple of attributes, nor to assign weights to coalitions of attributes ( Meyer & Ponthière 2011 ).
For example, the CES framework allows one to allocate weights to each attribute separately and to define a one-size-fits-all constant measure of ‘tolerated’ substitutability, but without direct control over the interaction within any n-tuple of attributes. 3
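The substitutability issue can be made concrete with a small numeric sketch (the attribute names and scores below are hypothetical), contrasting the linear mean with two non-compensative operators mentioned above:

```python
import math

def linear(x):      # perfect substitutability: the whole equals the sum of its parts
    return sum(x) / len(x)

def geometric(x):   # partial complementarity: low attributes drag the score down
    return math.prod(x) ** (1 / len(x))

def minimum(x):     # perfect complementarity: the weakest attribute dominates
    return min(x)

# Two hypothetical societies scored on four 0-10 attributes
# (employment, health, income, education):
balanced   = [5.0, 5.0, 5.0, 5.0]
unbalanced = [10.0, 10.0, 0.0, 0.0]

# The linear mean cannot distinguish the two profiles (both score 5.0),
# while the non-compensative operators rank the balanced society higher.
```

The geometric mean and minimum operator assign the unbalanced profile a score of zero, illustrating why a purely linear index may overstate the Inclusion of societies with severe deprivation in some attributes.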
To overcome this limitation, we employ Non-Additive Measures (NAM) and the Choquet Integral as aggregation operator (see also Grabisch et al., (2008) ; Grabisch & Labreuche (2010) and Meyer & Ponthière (2011) for further details). Because a measure is assigned to every n-tuple of attributes, the Choquet integral can represent many preference structures, ranging from the arithmetic/weighted mean to the minimum and maximum operators. Yet, although the Choquet Integral has been used in the economic literature on inequality ( Gajdos 2002 ), environmental sustainability ( Pinar et al., 2014 ), well-being ( Bertin et al., 2018 ; Meyer & Ponthière 2011 ), and customer satisfaction evaluation ( Angilella et al., 2014 ), it has never been applied, to the best of our knowledge, to multidimensional poverty measurement.
We now summarise some fundamental properties of the Choquet integral and of fuzzy measures. A fuzzy measure (also called a capacity ), defined over the index set of criteria $N = \{1, \dots, n\}$, is a set function $\mu : 2^N \to [0, 1]$ satisfying the following boundary and monotonicity conditions:

$$\mu(\emptyset) = 0, \qquad \mu(N) = 1, \qquad S \subseteq T \subseteq N \Rightarrow \mu(S) \le \mu(T). \qquad (1)$$
Such a non-additive measure (NAM) assigns to every subset (coalition) of criteria a measure which can be greater than, smaller than, or equal to the sum of the measures of its singletons, depending on whether the criteria in the coalition are characterised by synergic, redundant, or no interaction. In the latter case, the NAM collapses to the linear weighted aggregation (WA). Given a NAM $\mu$, its Möbius representation is the set function $m : 2^N \to \mathbb{R}$ defined as follows ( Marichal 2000 ):

$$m(S) = \sum_{T \subseteq S} (-1)^{|S \setminus T|}\, \mu(T), \qquad \forall S \subseteq N, \qquad (2)$$

where $|S|$ denotes the cardinality of the set $S$.
The inverse of (2) is called the zeta transformation and is given by:

$$\mu(S) = \sum_{T \subseteq S} m(T), \qquad \forall S \subseteq N. \qquad (3)$$
In terms of the Möbius representation, the boundary and monotonicity conditions are given by the following constraints ( Marichal 2000 ):

$$m(\emptyset) = 0, \qquad \sum_{T \subseteq N} m(T) = 1, \qquad \sum_{T \subseteq S} m(T \cup \{i\}) \ge 0 \quad \forall i \in N,\ \forall S \subseteq N \setminus \{i\}. \qquad (4)$$
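A minimal sketch of the Möbius and zeta transformations above, representing a capacity as a dict from frozensets of criterion indices to values (the two-criteria capacity used as an example is hypothetical; the subset enumeration is exponential in n, which is unproblematic for the four attributes used in this paper):

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as frozensets (including the empty set)."""
    s = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def mobius(mu, N):
    """Moebius representation: m(S) = sum over T subseteq S of (-1)^(|S|-|T|) mu(T)."""
    return {S: sum((-1) ** (len(S) - len(T)) * mu[T] for T in subsets(S))
            for S in subsets(N)}

def zeta(m, N):
    """Inverse (zeta) transformation: mu(S) = sum over T subseteq S of m(T)."""
    return {S: sum(m[T] for T in subsets(S)) for S in subsets(N)}

# Hypothetical two-criteria capacity with a synergic pair: mu({0,1}) > mu({0}) + mu({1}).
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
m = mobius(mu, {0, 1})   # m({0,1}) = 1 - 0.3 - 0.5 = 0.2 > 0: positive interaction
```

Applying `zeta` to the result recovers the original capacity, which is a convenient sanity check on any elicited set of Möbius coefficients.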
Let $\mu$ be a NAM defined on $N$ and $x = (x_1, \dots, x_n)$ the normalised values of the criteria belonging to $N$; the (discrete) Choquet integral of $x$ with respect to $\mu$ is defined as follows ( Grabisch 1997 ):

$$C_\mu(x) = \sum_{i=1}^{n} \left( x_{(i)} - x_{(i-1)} \right) \mu(A_{(i)}), \qquad (5)$$

where $(\cdot)$ means that the indices have been permutated in such a way that $0 = x_{(0)} \le x_{(1)} \le \dots \le x_{(n)}$, while $A_{(i)} = \{(i), \dots, (n)\}$ is the set of criteria whose value is at least $x_{(i)}$. Using the Möbius representation, the Choquet integral can be written as:

$$C_m(x) = \sum_{T \subseteq N} m(T) \min_{i \in T} x_i, \qquad (6)$$

$\min$ being the minimum operator.
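The discrete Choquet integral just defined can be sketched as follows (capacity stored as a dict from frozensets of criterion indices to values in [0, 1]; the two-criteria capacity below is hypothetical):

```python
def choquet(x, mu):
    """Discrete Choquet integral: sum_i (x_(i) - x_(i-1)) * mu(A_(i)), where the
    criteria are sorted by increasing value and A_(i) collects those whose value
    is at least x_(i)."""
    order = sorted(range(len(x)), key=lambda i: x[i])   # x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        A = frozenset(order[k:])        # criteria still at or above the current level
        total += (x[i] - prev) * mu[A]
        prev = x[i]
    return total

# Hypothetical super-additive capacity over two criteria (a synergic pair):
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
score = choquet((0.4, 0.8), mu)         # 0.4 * 1.0 + (0.8 - 0.4) * 0.5 = 0.6
```

With the capacity equal to 1 only on the full coalition, the same function returns the minimum of the criteria values; with an additive capacity it collapses to the weighted mean, illustrating the range of preference structures the operator can model.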
To define a capacity $\mu$ on $N$, $2^n - 2$ parameters are required. 4 For this reason, the operator is extremely flexible and allows one to represent and model, ex ante, many preference structures with respect to the analysed criteria, ranging from perfect complementarity (the minimum operator) to perfect substitutability (the maximum operator), passing through preference independence of the criteria (the arithmetic mean and its weighted version).
Such capacities need to be directly set by one or more decision-makers or implicitly elicited by means of a suitably designed questionnaire. This ex ante flexibility is extremely important because, in the expert panel’s preference-elicitation stage, it does not impose a priori any particular functional form that may conflict with the interviewees’ opinions.
Shapley values, interaction index, Orness/Andness index
In order to enhance the interpretation, summarisation and description of fuzzy measures, we will exploit three widely used behavioural indices: the Shapley value ( Shapley 1953 ), the Interaction index , and the orness index (or its complement, the andness index ) ( Grabisch 1997 ; Murofushi & Soneda 1993 ).
The Shapley value is a measure of the relative importance of an attribute (criterion), considering all the marginal gains in Social Inclusion between any coalition not including the attribute and the corresponding coalition that includes it. In terms of the Möbius representation, the Shapley value of criterion $i$ is defined as follows:

$$\phi(i) = \sum_{T \subseteq N \setminus \{i\}} \frac{m(T \cup \{i\})}{|T| + 1}, \qquad (7)$$

with $\sum_{i \in N} \phi(i) = 1$.
The Interaction index $I(S)$ among a combination of criteria $S \subseteq N$ with $|S| \ge 2$ represents the degree of complementarity or substitutability in the coalition; when two criteria are independent, the interaction index equals zero. Its Möbius representation reads as follows ( Grabisch 1997 ):

$$I(S) = \sum_{T \subseteq N \setminus S} \frac{m(S \cup T)}{|T| + 1}. \qquad (8)$$
The Orness/Andness degree , similarly to the interaction index, represents the degree of overall substitutability between the $n$ criteria; it is a measure of the tolerance of the decision-maker’s preferences with respect to the criteria proposed. Indeed, tolerant decision-makers ($\text{orness} > 0.5$) can accept that only some criteria are satisfied: the higher the tolerance, the closer the aggregation function is to the ‘maximum’ operator. On the other hand, intolerant decision-makers demand that most criteria are satisfied. This corresponds to a conjunctive behaviour ($\text{orness} < 0.5$), whose extreme case is the ‘minimum’ operator. When the $n$ criteria are additive, we have that $\text{orness} = \text{andness} = 0.5$.
By construction:

$$\text{orness}(C_\mu) = \frac{1}{n - 1} \sum_{T \subseteq N} m(T)\, \frac{n - |T|}{|T| + 1}, \qquad (9)$$

with $\text{orness}(C_\mu) \in [0, 1]$ and $\text{andness}(C_\mu) = 1 - \text{orness}(C_\mu)$.
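The three behavioural indices can be computed directly from the Möbius representation; a minimal sketch (Möbius coefficients stored as a dict from frozensets to reals; the example capacity is hypothetical):

```python
def shapley(m, N):
    """Shapley value phi(i): summing m(S)/|S| over all coalitions S containing i,
    an equivalent rewriting of the Moebius formula with S = T ∪ {i}."""
    return {i: sum(v / len(S) for S, v in m.items() if i in S) for i in N}

def interaction(m, S):
    """Interaction index I(S): summing m(U)/(|U| - |S| + 1) over supersets U of S,
    an equivalent rewriting with U = S ∪ T, so |T| + 1 = |U| - |S| + 1."""
    return sum(v / (len(U) - len(S) + 1) for U, v in m.items() if S <= U)

def orness(m, n):
    """orness = 1/(n-1) * sum_T m(T) * (n - |T|)/(|T| + 1); andness = 1 - orness."""
    return sum(v * (n - len(S)) / (len(S) + 1) for S, v in m.items()) / (n - 1)

# Moebius representation of the two-criteria 'maximum' operator: a fully
# substitutive (redundant) pair, hence interaction -1 and orness 1.
m_max = {frozenset({0}): 1.0, frozenset({1}): 1.0, frozenset({0, 1}): -1.0}
```

For an additive capacity the Shapley values coincide with the singleton weights and the orness equals 0.5, matching the benchmark cases described above.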
Expert-based approach and sample selection
The strategies to determine the parameters of an aggregation function are commonly divided between ‘positive’ and ‘normative’ (see Decancq & Lugo (2013) for a discussion and literature review). Positive (or ‘descriptive’) methods include data-driven approaches such as Principal Component Analysis (PCA) and Data Envelopment Analysis (also known as Benefit of the Doubt, BoD). Such statistical methods are often adopted because they do not require the availability of ‘objective knowledge on the true policy weights’ ( Rogge & Konttinen 2018 ). Yet, their independence from policy or economic judgement limits the possibility of interpreting their results from a normative perspective (an impossibility often referred to as ‘Hume’s guillotine’, see Decancq & Lugo (2013) ). The BoD method assigns weights in order to maximise an underlying optimisation function. An implication of this method is that several criteria can be assigned zero, or very low, weight (as in the work by Rogge & Konttinen (2018) on Social Inclusion). The PCA method (e.g. Döpke et al., 2017 ; Ivaldi et al., 2016 ), which assigns weights on the basis of the observed correlation between attributes, has also been described as less suitable for policy evaluation, due to its weights having a statistical, yet not economic, justification ( Decancq & Lugo 2013 ; Mazziotta & Pareto 2019 ).
Normative methods aim to characterise the aggregation parameters with explicit and economically meaningful preferences, either set by the researcher (e.g. equal weighting), or by some policy targets, or derived from participatory methods (e.g. experts’ opinions). In the context of Social Inclusion, there are no comparable policy goals adopted at the European level, as countries typically set national poverty and social exclusion targets ( Social Protection Committee 2018 ). With participatory methods, the choice of the expert sample is arbitrary, and it is often subject to trade-offs between resource availability, degree of panel expertise, and representativeness of the panel, which can affect the interpretation of the results ( Kim et al., 2015 ).
In this study, we adopt a normative, expert-based strategy to elicit the Choquet aggregation parameters. Like many other authors in the field of multidimensional measurement, we build our work against the background of a well-accepted belief perhaps most famously expressed by Sen & Anand (1997) : there are inescapable elements of subjectivity (judgement) in any stage of creating a multidimensional index, from the choice of attributes and dimensions, to aggregation weights, to data normalisation. Therefore, a crucial feature of any index, Sen and Anand argue, lies in whether such subjectivity is explicitly stated, so that public scrutiny can occur. Following this rationale, we acknowledge that any weighting scheme we elicit from an expert panel is subjective, for example, because the choice of experts is arbitrary: just as for the parameters in data-driven approaches, the preferences expressed by the expert sample cannot be considered ‘objective’, and it is possible that different expert panels would lead to different elicited weights.
We therefore adopt four steps in order to generate a composite index characterised by normative-interpretable results and transparent weights, while minimising the subjective bias.
First, we selected a sample of experts from a homogeneous network of policy makers holding homologous positions and expertise in the field of social inclusion.
We chose as experts the 20 Directors General for Social Policies (DGSP) in Italy ( Direttore per le Politiche Sociali/Programmazione sociale ). The DGSPs hold an administrative (i.e. not elected) top managerial role at regional level in Italy, and are responsible for planning and coordinating social policies at local level, with a particular focus on the interchanges between poverty reduction, labour market inclusion and health-care policies, i.e. the core policy areas of Social Inclusion. 5 The choice of a specific and homogeneous expert sample should ensure a high level of ex ante internal consistency, as the experts (i) are public managers at regional level in Italy, and (ii) all oversee the planning and coordination of social policies. This should reassure the reader that, although individual competences can vary greatly, our experts all share a broad perspective and expertise on Social Inclusion, hence allowing us to collect relevant, informed, and comparable policy preferences on Social Inclusion. Moreover, since the population of DGSPs is limited, representativeness is easier to achieve. 6
Several alternatives for preference elicitation existed, many of which would have required additional resources, including surveying a random population ( Kristensen & Johansson 2008 ; Pouliakas & Theodossiou 2010 ), students ( Meyer & Ponthière 2011 ), or exploiting secondary data from large surveys ( Decancq et al., 2019 ; Decancq & Schokkaert 2015 ; Noble et al., 2008 ). These strategies were unfeasible, as they would not have allowed us to collect preferences from participants with comparable and extensive experience in Social Inclusion. Alternatively, we could have selected a heterogeneous set of experts, i.e. prominent experts in different areas of the public or private sector ( Bertin, Carrino & Giove 2018 ; Chowdhury & Squire 2006 ; Cohen & Saisana 2014 ; Hoskins & Mascherini 2009 ; Pinar et al., 2014 ). However, this would have reduced our ability to achieve good representativeness of the chosen expert population.
Second, as recommended by the relevant literature (e.g. Kristensen & Johansson (2008) ; Decancq & Lugo (2013) ) we normalised the core statistical variables which are then used in the preference elicitation exercise. There are two reasons for this choice, as discussed in section 3.3.3 . First, to fulfil the ‘scenario equivalence’ assumption, which states that the attribute levels in any scenario should be understood in the same way by all experts. Second, because our aim is to build a normative measure of Social Inclusion, the unit of measurement of its normalised attributes must reflect some value judgement (an expert-based normalisation function) ( Carrino 2016 ; Chowdhury & Squire 2006 ; Despic & Simonovic 2000 ; Hoskins & Mascherini 2009 ; Meyer & Ponthière 2011 ; Pouliakas & Theodossiou 2010 ). As detailed further in online supplementary material, Appendix 1.2 , in this paper we employ a min-max normalisation function estimated by Carrino (2016 , 2017 ) through interviews of 149 academics and researchers from the Department of Economics and Management at the Ca’ Foscari University of Venezia (Italy).
Third, we present and discuss the extent to which our experts’ elicited weights reflect a consensus (Section 4.1 ). While our experts will be shown to have similar preferences in terms of aggregation weights, we prudently recall that no ‘true values’ of the weights exist. In the words of Mascherini & Hoskins (2008) , ‘the judgment of one of the outliers may be correct, and those who share a consensus view may be wrong’.
Fourth, as sensitivity analysis (Section 4.3 ), we show the distribution of our Social Inclusion index across all the experts (that is, we repeatedly compute Social Inclusion for each European region by adopting the weights-set of one expert at a time, and then we show the distribution of the obtained values for Social Inclusion).
Preference elicitation approach and survey design
Preference elicitation approach
In order to estimate the parameters (capacities) of the Choquet Integral, we follow the Least-Squares capacity-elicitation approach, a widely used cardinal-based fuzzy-measure elicitation method which allows one to identify the values of the Möbius coefficients (and thus the behavioural indices Shapley , interaction , and orness ) from the answers given by one or more experts to a suitably designed questionnaire (see Section 3.3.2 ). 7 For example, an expert is given a questionnaire formed by $v$ questions. Each question represents a $j$th hypothetical scenario, constituted by a vector of criteria values (normalised between 0 and 10). The expert then provides an evaluation (on the same scale). The least-squares method minimises the average quadratic distance between the expert’s evaluations and the predicted values computed by means of the Choquet integral.
The Least-Squares problem has a unique solution—i.e. the quadratic program is strictly convex—if and only if the criteria values attached to the scenarios are properly chosen (for details, see Farnia & Giove (2015) ). Bertin et al. (2018) (p.26) provide a detailed example of how the Least-Squares estimation is performed in a setting similar to ours.
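A sketch of the least-squares step, exploiting the Möbius (min-) form of the Choquet integral so that the fit becomes a linear least-squares problem in the Möbius coefficients. For brevity, the monotonicity constraints of the quadratic program are omitted, a full 3-level factorial replaces the paper's 27-scenario full-rank design, and synthetic answers stand in for a real expert's evaluations:

```python
import numpy as np
from itertools import combinations, product

n = 4   # four attributes, as in the paper (employment, health, income, education)
coalitions = [frozenset(c) for r in range(1, n + 1)
              for c in combinations(range(n), r)]        # 2^n - 1 = 15 nonempty subsets

# Design matrix: one row per scenario, one column per coalition T, with entry
# min_{i in T} x_i, since the Choquet integral is linear in the Moebius coefficients.
scenarios = list(product([0.0, 0.5, 1.0], repeat=n))     # 3^4 = 81 scenarios
X = np.array([[min(x[i] for i in T) for T in coalitions] for x in scenarios])

# Synthetic 'expert answers' generated from a known additive capacity,
# standing in for the evaluations collected in the questionnaire.
m_true = np.zeros(len(coalitions))
for k, T in enumerate(coalitions):
    if len(T) == 1:
        m_true[k] = [0.4, 0.3, 0.2, 0.1][next(iter(T))]
answers = X @ m_true

# Unconstrained least squares on the Moebius coefficients; the paper's quadratic
# program additionally imposes the boundary and monotonicity constraints.
m_hat, *_ = np.linalg.lstsq(X, answers, rcond=None)
```

Because the design matrix has full column rank (it contains the indicator vector of every coalition), the noiseless synthetic answers are recovered exactly, mirroring the uniqueness condition discussed above.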
Survey design
The scenario-evaluation is a well-established method to retrieve a respondent’s preferences on the known attributes of a complex phenomenon, based on the answers she gives to a specific questionnaire ( Benjamin et al., 2014 ; Bertin et al., 2018 ; Despic & Simonovic 2000 ; Green & Rao 1971 ; Kristensen & Johansson 2008 ; Meyer & Ponthière 2011 ; Pinar et al., 2014 ; Scholl et al., 2005 ). This method, an application of conjoint analysis, is preferred to the widely used strategy of budget-allocation ( Chowdhury & Squire 2006 ; Hoskins & Mascherini 2009 ; Kim, Kee & Lee 2015 ) which, by assigning weights to each attribute independently, imposes an a priori assumption of no interaction.
Through a face-to-face questionnaire, we presented each decision-maker with a finite number of scenarios depicting hypothetical societies (vignettes). Each scenario is described by a vector involving different levels of the four core attributes for employment, health, income, and education. Respondents evaluate each scenario on a stepwise cardinal scale from 0 to 10. The evaluation requires considering all the attributes at once, and trading them off in order to produce an overall judgement. Thus, respondents implicitly implement their personal ‘welfare function’, as they would when facing a real scenario.
We assumed that each attribute in a scenario (specifically: longevity, long-term unemployment, poverty rate, school dropouts) can take three performance levels, i.e. High (corresponding to a score of 10), Intermediate (5) or Low (0). We then built a full-rank matrix of 27 scenarios, which ensures the uniqueness of the solution to the estimation of the Choquet parameters (see Section 3.3.1 ). 8
For this exercise to be valid, we needed to satisfy the crucial ‘scenario equivalence’ assumption ( Kristensen & Johansson 2008 ), stating that the attribute levels in each scenario should be understood in the same way by all respondents. We took several steps to make this assumption believable.
First, as suggested by the literature on expert elicitation, we provided participants with examples of what was desired in clear and simple language, to guide them through the process ( Cohen & Saisana 2014 ). Second, we implemented five trivial scenarios (where all attributes, and the overall outcome, were set at either 0 (low), 2.5 (mid-low), 5 (intermediate), 7.5 (mid-high), or 10 (high)), to enhance answers’ consistency: respondents were asked to drag each scenario near the trivial scenario which they thought best represented the embedded level of Inclusion. 9 Third, as discussed in the previous Section, we drew respondents from a homogeneous network of policy makers holding homologous positions and expertise in the field of social inclusion.
Fourth and foremost, the decision matrix was built so that each attribute level was characterised by a specific numerical example, following the normalisation procedure described hereafter. Figure 1 illustrates some of the cards used to depict scenarios (here translated into English).
A limitation of this approach lies in the limited number of attributes that a decision-maker can simultaneously handle while providing consistent answers. This could become an issue for studies employing numerous attributes (e.g. Peiró-Palomino (2018) ). However, a possible solution would be to split the attribute set into conceptually meaningful groups, as done by, e.g. Bertin et al. (2018) .
Normalisation strategy
To prevent each respondent from interpreting the verbal labels differently, we characterised the labels with a specific numeric example ( Despic & Simonovic 2000 ; Pouliakas & Theodossiou 2010 ). This required us to normalise each attribute from its original scale to a fixed 0–10 scale where 0 and 10 correspond to, respectively, a ‘very bad’ and an ‘excellent’ performance, and 5 is ‘intermediate’ ( Meyer & Ponthière 2011 ). We employ the linear min–max normalisation function ( Giovannini et al., 2008 ), which rescales variables between 0 and 10 depending on how far they are from a low and a high threshold. Adopting a data-driven approach in setting the thresholds (e.g. choosing the observed minimum and maximum performance as thresholds, as in Peiró-Palomino (2018) ) leads to a unit of measurement which is free from a normative connotation, yet it is harder to determine what it reflects in economic terms ( Carrino 2016 ; Lefebvre et al., 2010 ). If the parameters are data-driven, then the normalised variables should be interpreted under a statistical perspective: in the data-driven min–max, for example, a transformed value of ‘0’ merely marks ‘the last one’, or ‘the worst one’, observed among the available data, which does not necessarily correspond to an undesirable condition of poverty. As our aim is to build a normative measure of Social Inclusion, its attributes must be normalised according to some value judgement (an expert-based normalisation function). This translates to linking the extreme values ‘0’ and ‘10’ with, e.g. a certain definition of desirability, thus making the normalisation independent from the data. When an indicator lies above or below such fixed ‘goalposts’, further variations do not contribute to the composite measure (see e.g. the discussion in Anand & Sen (1994) ; Klugman, Rodríguez & Choi (2011) ; Ravallion (2012) ; Lefebvre et al., (2010) and Mazziotta & Pareto (2016) ).
In this paper, we follow the recommendations from the United Nations Development Programme and adopt ‘desirable’ and ‘undesirable’ targets as benchmarks, which allows us to characterise the normalisation function in a normative way. Specifically, we exploit the thresholds elicited by Carrino (2016 , 2017 ) from a large homogeneous sample of Economics scholars at the Ca’ Foscari University of Venezia (Italy). The normalisation strategy is detailed in online supplementary material, Appendix 1.2 .
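A minimal sketch of min-max normalisation with fixed normative goalposts (the poverty-rate thresholds below are hypothetical examples, not the values elicited by Carrino (2016, 2017)):

```python
def normalise(value, undesirable, desirable):
    """Min-max normalisation with fixed normative goalposts: 'undesirable' maps
    to 0, 'desirable' maps to 10, and values beyond either goalpost are clipped,
    so further variation no longer moves the composite measure. For 'lower is
    better' indicators, pass undesirable > desirable."""
    score = 10.0 * (value - undesirable) / (desirable - undesirable)
    return min(10.0, max(0.0, score))

# Hypothetical goalposts for a poverty rate (%), where lower is better:
low_inclusion = normalise(30.0, undesirable=30.0, desirable=5.0)   # 0.0
mid_inclusion = normalise(17.5, undesirable=30.0, desirable=5.0)   # 5.0
top_inclusion = normalise(2.0,  undesirable=30.0, desirable=5.0)   # 10.0 (clipped)
```

Because the goalposts are fixed by value judgements rather than by the observed sample, a score of 0 or 10 retains a normative meaning across regions and years.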
Experts’ preferences aggregation
Our methodology would, in principle, allow us to estimate a Choquet Integral for each decision-maker, which would result in as many indices of Social Inclusion. However, to ease the interpretation of the output, we build a ‘representative’ expert (Expert Fusion), and its related Inclusion index. Many approaches can be used for the fusion of experts’ preferences, and many of them are based on a consensus procedure (non-exhaustive references include Herrera-Viedma et al., (2002) ; Li et al., (2014) ; Quesada et al., (2015) ). In this context, we apply an approach similar to Farnia & Giove (2015) , and we weight the experts’ answers based on their consistency in answering the questionnaire (rather than, e.g. on how close their answers are to the average answer of the other decision-makers). In our perspective, the fact that an expert has a strong dissenting opinion compared to the remaining sample is not, per se, a reason to reduce his/her contribution to the Fusion expert. It is, however, important to detect potential cases of inconsistency, randomness, etc., in the evaluation process.
Given that the NAM approach is sufficiently general to cover many preference structures, each expert’s preference has been weighted according to his/her overall consistency in judging the alternatives proposed in the Choquet context. We measure an expert’s consistency as a function of the sum of squared distances, in such a way that the greater (smaller) the sum, the smaller (greater) the contribution of the corresponding expert.
Given $v$ alternatives to be judged, we compute the following consistency index for each $j$th expert:

$$I_j = 1 - \frac{\sum_{i=1}^{v} \left( y_{ij} - \hat{y}_{ij} \right)^2}{\sum_{i=1}^{v} \left( y_{ij} - \bar{y}_{j} \right)^2},$$

where $\hat{y}_{ij}$ represents the estimated Choquet value for expert $j$ in the $i$th scenario, $y_{ij}$ the Choquet value set by expert $j$ in the $i$th scenario, and $\bar{y}_j$ the sample mean of all his/her decisions.
The weight attached to the preference of the $j$th expert is computed as follows:

$$w_j = \frac{I_j}{\sum_{k} I_k}.$$
Given that a linear weighted combination of Möbius representations is a Möbius representation too, the final Möbius representation for the Expert Fusion can be defined as follows:

$$m_F(T) = \sum_{j} w_j\, m_j(T), \qquad \forall T \subseteq N.$$
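A sketch of the fusion step under the consistency-based weighting described above, assuming an R-squared-style index (one minus the ratio of the squared fitting distances to the total squared deviation of an expert's answers); the answers and fitted values are synthetic:

```python
def consistency(answers, fitted):
    """Assumed R-squared-style index: 1 - SSE/SST, so a larger squared distance
    between an expert's stated evaluations and the Choquet values fitted to
    them yields a smaller index."""
    mean = sum(answers) / len(answers)
    sse = sum((y - f) ** 2 for y, f in zip(answers, fitted))
    sst = sum((y - mean) ** 2 for y in answers)
    return 1.0 - sse / sst

def fusion_weights(indices):
    """w_j proportional to the consistency index of expert j."""
    total = sum(indices)
    return [i / total for i in indices]

def fuse_mobius(mobius_list, weights):
    """A convex combination of Moebius representations is itself a Moebius
    representation, so the fused ('representative') expert is well defined."""
    return {T: sum(w * m[T] for w, m in zip(weights, mobius_list))
            for T in mobius_list[0]}

# Synthetic example: expert 1 is perfectly fitted, expert 2 has small residuals,
# so expert 1 receives a slightly larger weight in the fusion.
i1 = consistency([0.0, 5.0, 10.0], [0.0, 5.0, 10.0])   # 1.0
i2 = consistency([0.0, 5.0, 10.0], [1.0, 5.0, 9.0])    # 1 - 2/50 = 0.96
w1, w2 = fusion_weights([i1, i2])
```

Note that the weighting depends only on each expert's internal fit, not on agreement with the other experts, consistently with the rationale stated above.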
While we adopt the aforementioned method in the main analysis, we have verified that the results of our Social Inclusion index are almost identical when we use an alternative aggregation strategy of the experts’ preferences, i.e. a simple average (results available in the online supplementary material, Appendix 1.2 ).
On measurement error
In principle, both the conceptual framework and the empirical framework for Social Inclusion are vulnerable to measurement error. Indeed, the conceptual framework of Social Inclusion was introduced precisely because of the measurement error inherent to theoretical approaches that essentially equated Social Exclusion to poverty. As explained by Atkinson et al., (2004) , measurement error is one of the raisons d’être of Social Inclusion: as income data are inadequate to fully capture social disadvantages, non-monetary dimensions have been included to supplement information on income. As discussed in Section 2 , however, the current EU’s theoretical framework on Social Inclusion is not free from limitations, and alternative frameworks have been proposed. Nevertheless, we believe that a large number of contributions in economics, public policy and sociology since the work of the Atkinson Commission have shown that the EU’s definition of Social Inclusion is a solid, and theoretically grounded, step in the right direction towards measuring social disadvantage.
From an empirical perspective, we emphasise that the statistical indicators are not immune from measurement error (although their choice was also guided by the aim of reducing the risk of measurement error to begin with). For example, poverty rates assume that financial resources are equally divided among all those living in a household, ignoring further sources of inequality such as sex-based inequalities: women might be substantially disadvantaged, up to being at risk of poverty, even though the household as a whole is not ( Atkinson et al., 2004 ). Indicators on housing have been excluded from the original empirical definition of Social Inclusion, due to a lack of data quality, such as data unavailability and the absence of a common set of definitions and measures, especially on homelessness and precarious housing ( Atkinson et al., 2002 ; Atkinson et al., 2004 ). Further limitations can be discussed for health, education, and labour market indicators.
Still, we believe the bias induced by measurement error on the interpretation of our empirical results is limited, for three main reasons. First, we interpret our index in terms of levels of performance rather than in terms of regional rankings. This follows an explicit principle stated by the Atkinson Commission with respect to how the indicators should be used, in order to limit the bias of measurement error, among other reasons: the authors note that rankings between countries are more vulnerable to measurement error, while levels of performance are more robust ( Atkinson et al., 2004 ). Consistent with this caveat, we will refrain from emphasising small differences across countries in our Social Inclusion index.
Second, if we assume that data measurement error improves or worsens simultaneously for all countries (a plausible hypothesis given the ex ante effort made in the choice of comparable indicators), we can more confidently focus on trends within countries, and on substantial widening or narrowing gaps across countries, such as the gap emerging between Mediterranean regions (on average) and Northern regions (on average).
Third, we note that, similarly to previous works in this literature, our paper is mostly focused on the aggregation methodology, that is, on the choice and interpretation of the normalisation and aggregation strategies. Our aggregation method (like our normalisation method) is relatively less subject to measurement error than data-driven methods, as the weights are derived from experts’ preferences over fictional scenarios, where there is no measurement error. Data-driven parameters (e.g. weights) are derived from the observed performances, and are therefore potentially more exposed to measurement-error bias than expert-driven parameters.
Nevertheless, measurement error in the statistical indicators underlying the Social Inclusion index remains a relevant issue when discussing the results of any index, and further research is surely needed to clarify and minimise the induced bias.
Decision-makers’ preferences
Twelve Experts were interviewed to gather preferences on the criteria considered in the composite index. In this section, the results of the decision process are shown, with a focus on the main behavioural indices listed in section 3.1 ; full results are available in online supplementary material, Appendix 1.4, Table 6 .
Referring to Shapley values, hence to the relative importance of each criterion, Figure 2 shows that Experts consider education as the main driver of Social Inclusion (preference-fusion value 0.306), followed by poverty (0.257), unemployment (0.226), and life expectancy (0.209). Interestingly, education represents the most important criterion for the ‘representative expert’, while also exhibiting the lowest degree of consensus among Experts, measured in terms of volatility of preferences: the minimum and maximum values, although both outliers, are 0.132 and 0.484, respectively. Poverty, unemployment, and life expectancy share a similar level of consensus in terms of volatility. In the main analysis, we will aggregate experts’ preferences in a Fusion expert preference set. However, in the sensitivity analysis, we will build a specific index of Social Inclusion for each expert. We will do so because, as discussed earlier in the text, while we can value consensus as a hint to the robustness of preferences, ‘the judgment of one of the outlier may be correct, and those who share a consensus view may be wrong’ ( Mascherini & Hoskins 2008 ), and we would need a more sophisticated participatory method (such as a Nominal Group Technique as in Bertin et al., (2018) ) to disentangle the extent to which experts’ opinions are really diverging.
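To make the notion of criterion importance concrete, the Shapley value of each criterion can be computed directly from an elicited capacity (fuzzy measure). The sketch below, in Python, uses a purely illustrative capacity over the four criteria (the values are not those elicited from the Experts) and verifies the efficiency property, i.e. that the Shapley values sum to one:

```python
from itertools import combinations
from math import factorial

# Hypothetical capacity (fuzzy measure) over the four Social Inclusion
# criteria -- illustrative values only, not the elicited preferences.
criteria = ("education", "poverty", "unemployment", "life_expectancy")
n = len(criteria)

base = {"education": 0.30, "poverty": 0.26, "unemployment": 0.23,
        "life_expectancy": 0.21}

def mu(subset):
    """Toy monotone capacity: additive weights plus a small bonus for
    coalitions of two or more criteria that include education."""
    if len(subset) == n:
        return 1.0
    s = sum(base[c] for c in subset)
    if len(subset) >= 2 and "education" in subset:
        s = min(1.0, s + 0.02)
    return s

def shapley(i):
    """Shapley importance of criterion i for capacity mu."""
    others = [c for c in criteria if c != i]
    total = 0.0
    for k in range(len(others) + 1):
        for T in combinations(others, k):
            w = factorial(n - k - 1) * factorial(k) / factorial(n)
            total += w * (mu(T + (i,)) - mu(T))
    return total

phi = {c: shapley(c) for c in criteria}
print(phi)
print(round(sum(phi.values()), 10))  # efficiency: values sum to mu(N) = 1
```

With this toy capacity, education obtains the largest Shapley value, mirroring the ordering reported in Figure 2.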
There is no overall evidence of strong complementarity or substitutability among criteria, as represented by the orness index (value 0.42; see Figure 3 ): in the case of the Experts’ preference fusion there is only a slight preference for complementarity. As the orness index varies in the range [0,1], with 0.5 indicating preferential independence among criteria, individual Experts’ preferences vary from remarkable complementarity (0.288) to mild substitutability (0.578).
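Under one common definition, the orness of a Choquet integral is its expected value over uniformly distributed inputs, rescaled so that the minimum operator scores 0 and the maximum operator 1; for an additive capacity (preferential independence) it equals exactly 0.5. A standard-library sketch with an illustrative additive capacity, using the Möbius transform and the fact that the minimum of k iid U(0,1) draws has expectation 1/(k+1):

```python
from itertools import combinations

criteria = ("education", "poverty", "unemployment", "life_expectancy")
n = len(criteria)

# Illustrative *additive* capacity (not the elicited one): weights sum to 1.
base = {"education": 0.30, "poverty": 0.26, "unemployment": 0.23,
        "life_expectancy": 0.21}

def mu(subset):
    if len(subset) == n:
        return 1.0
    return sum(base[c] for c in subset)

def subsets(items):
    for k in range(len(items) + 1):
        yield from combinations(items, k)

def mobius(T):
    """Moebius transform m(T) = sum over S subset of T of (-1)^(|T|-|S|) mu(S)."""
    return sum((-1) ** (len(T) - len(S)) * mu(S) for S in subsets(T))

# E[C_mu] over uniform inputs, via C_mu(x) = sum_T m(T) * min_{j in T} x_j
# and E[min of k iid U(0,1)] = 1/(k+1).
expected = sum(mobius(T) / (len(T) + 1) for T in subsets(criteria) if T)
orness = (expected - 1 / (n + 1)) / ((n - 1) / (n + 1))
print(round(orness, 3))  # 0.5 for an additive capacity
```

Capacities with larger masses on coalitions (and correspondingly smaller singleton masses) push the orness below 0.5, which is the conjunctive, complementarity-like behaviour discussed in the text.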
Interaction indices ( Figure 4 ) allow a more detailed evaluation of the degree of complementarity or substitutability between pairs of criteria. The index varies in the range [−1, 1], where −1 represents perfect substitutability and 1 perfect complementarity.
Education & life expectancy, education & unemployment, and unemployment & life expectancy are the three pairs of criteria with the highest, although not remarkable, level of complementarity. While in the first and third case all Experts but one consider the criteria slightly complementary, in the case of education & unemployment all Experts agree in not considering them substitutes; this is the pair on which Experts have shown the largest consensus. For the other combinations of criteria, we note a generally low consensus among Experts, as their preferences swing around the independence case: some Experts considered these criteria more complementary than substitutable, others the opposite. This is clearly visible for education & poverty and for unemployment & poverty, where preferences swing around zero.
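Pairwise complementarity and substitutability of this kind are typically quantified with the Shapley interaction index of a pair of criteria. The sketch below computes it for a toy capacity with a positive education–unemployment synergy; the capacity values are illustrative assumptions, not the elicited preferences:

```python
from itertools import combinations
from math import factorial

criteria = ("education", "poverty", "unemployment", "life_expectancy")
n = len(criteria)

# Toy capacity: additive weights summing to 0.95, plus a 0.05 synergy bonus
# whenever education and unemployment appear together (so mu(N) = 1).
base = {"education": 0.29, "poverty": 0.25, "unemployment": 0.22,
        "life_expectancy": 0.19}

def mu(subset):
    s = sum(base[c] for c in subset)
    if "education" in subset and "unemployment" in subset:
        s += 0.05  # complementarity bonus
    return s

def interaction(i, j):
    """Shapley interaction index for the pair {i, j}."""
    others = [c for c in criteria if c not in (i, j)]
    total = 0.0
    for k in range(len(others) + 1):
        for T in combinations(others, k):
            w = factorial(n - k - 2) * factorial(k) / factorial(n - 1)
            total += w * (mu(T + (i, j)) - mu(T + (i,)) - mu(T + (j,)) + mu(T))
    return total

print(round(interaction("education", "unemployment"), 4))   # 0.05 (complementary)
print(round(interaction("poverty", "life_expectancy"), 4))  # 0.0 (independent)
```

A positive index indicates complementarity (the pair is worth more than its parts), a negative one substitutability, and zero the independence case around which many of the Experts' preferences swing.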
The social inclusion index
Social Inclusion index by country
We start by showing the Social Inclusion Index computed according to the preferences of the ‘Fusion’ Expert (details in Section 3.4 ). We aggregate regional Social Inclusion scores into country figures (regional scores are weighted by population size within a country) and summarise them in Figure 5 . Because of the normalisation strategy we employed, the levels of Social Inclusion can be fully interpreted as carrying value judgements, in that low levels of Inclusion convey an undesirable social condition, while high levels convey a desirable one. Finally, due to the potential measurement error in the original statistical indicators on which the index is based, we refrain from commenting upon small differences in levels of performance across countries.
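The regional-to-country aggregation described above is a population-weighted average of regional scores. A minimal sketch, with made-up region names, scores, and populations:

```python
# Population-weighted aggregation of regional Social Inclusion scores into
# a country figure. Regions, scores and populations are fictional.
regions = {
    "Region A": {"score": 55.0, "pop": 10_000_000},
    "Region B": {"score": 20.0, "pop": 5_800_000},
    "Region C": {"score": 70.0, "pop": 1_100_000},
}

total_pop = sum(r["pop"] for r in regions.values())
country_score = sum(r["score"] * r["pop"] for r in regions.values()) / total_pop
print(round(country_score, 2))
```

Weighting by population means that a country figure is dominated by its most populous regions, which is why the within-country boxplots of Figure 6 carry information that the country averages of Figure 5 cannot show.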
Our results can be summarised in three main parts. First, the lowest levels of Social Inclusion are found in Italy and Spain at every point in time, while Denmark exhibits the highest scores since 2010. At no point during the considered interval did Italy or Spain reach the same level of Social Inclusion as Belgium, Denmark or Germany.
Second, the gap in Social Inclusion among the five countries, in particular between the Mediterranean countries and the Continental countries, increases dramatically during the observed time interval. This suggests that a polarisation has taken place during the last 15 years, in that most regions in Spain and Italy have seen their Social Inclusion either declining or stagnating, while other European regions were improving their conditions.
Third, dynamics are rather different across countries. Denmark and Germany exhibit an improvement in Social Inclusion even in the aftermath of the Great Recession. Belgium and Italy, on the other hand, show a flattening curve after the 2008 crisis, with both countries picking up an increasing trend in 2016 and 2017. Spain’s Social Inclusion drops substantially between 2008 and 2012, although a steep recovery is shown after 2014; Spain’s levels of Inclusion in 2017 are nonetheless far lower than in the pre-2008 years. The trends for Spain and, to a lesser extent, for Italy show that the Great Recession deeply hurt Mediterranean regions, while Continental regions were better able to cope.
All in all, this result conveys a worrisome picture of Social Inclusion in our selected countries. The average evaluation of Social Inclusion in Denmark is more than three times higher than in Spain, in 2017, but this gap reached a factor of four during the previous years.
Heterogeneity in Social Inclusion within country
By exploiting regional disaggregation, we can elaborate more on the sub-national inequalities in Social Inclusion within countries. In Figure 6 , we plot the distribution of regional Social Inclusion by country (for selected years): the larger the boxplot, the larger the regional inequalities. Results indicate that: (i) Denmark’s (high) Social Inclusion levels are consistently homogeneous across its regions; (ii) Germany has seen both an increase in average Social Inclusion and a reduction of its territorial differences across the time interval; (iii) the drop in Spain’s Social Inclusion is not accompanied by an increase in its regional variability, which suggests that the decline has been widespread across the country, except in the latest year, 2017, when the size of the boxplot suggests that the recovery process has been stronger in some areas than in others; (iv) Belgium and Italy exhibit a very high internal variability, which confirms the well-known inequalities between north and south in both countries. Indeed, and especially in the most recent years, the top 25% of Italian regions perform at levels similar to those of the best-performing regions in Germany, and the same holds for Belgium’s Vlaams region. Interestingly, after the financial crisis the worst-performing 25% of Italian regions still exhibited a Social Inclusion level higher than the median level in Spain, except in 2017.
A closer look at regional social inclusion
To highlight some dynamics hidden from the previous graphs, in Figure 7 we plot the trends in Social Inclusion for the best (left panel) and worst (right panel) regions in each country (best and worst performance is computed on average across years). Among the best regions, different patterns emerge: (i) while Belgium’s Vlaams and Germany’s Baden-Württemberg exhibit a rather flat trend, Denmark’s Sjaelland sees an almost continuous catching-up in Social Inclusion; (ii) albeit both Italy’s Trentino—Alto Adige and Spain’s Navarra see their Social Inclusion decline after 2008, Trentino shows a quicker convergence with the other best performers than Navarra. A look at the worst performers (right panel) confirms several of the stylised facts already described: (iii) Denmark’s worst region is, for both levels and trends, almost indistinguishable from the ‘best’ regions in the left panel of Figure 7 , confirming the high homogeneity of Social Inclusion levels in the country; (iv) Germany’s worst region is steadily improving across the years and clearly departs from Italy’s, Spain’s and Belgium’s worst performers in terms of Social Inclusion, which confirms the overall positive trend of Germany observed in Figure 5 and suggests a sort of internal convergence, supporting the evidence of declining regional variability shown in Figure 6 . Finally, Italy’s and Spain’s worst performers remain steadily at very low and worrisome levels of Social Inclusion, unlike Belgium’s Bruxelles region, which shows a clear improvement in its social conditions since 2013, although remaining at levels of Inclusion that are almost three times lower than those of the worst observed region in Denmark in 2017.
Sensitivity analysis
Heterogeneity by expert
The index of Inclusion described in the previous section stems from an aggregation function whose parameters are a summary of the preferences of the pool of Experts involved in the elicitation process (‘Fusion expert’). As such, they do not represent the preferences of any specific expert. Thus, one might wonder about the extent to which the Social Inclusion Index changes depending on the preferences of specific experts. We thus re-estimated the Social Inclusion Index separately for each expert and, in Figure 8 , we show (for each country and year) the distribution of the values of the Index across all the experts.
Results from this sensitivity analysis confirm the conclusions drawn from the ‘Fusion Expert’ model: the boxplots for Italy and Spain always lie below those of Germany and Belgium, indicating that, regardless of the experts’ preferences towards the components of Social Inclusion, Mediterranean countries fare worse than Continental countries. A similar point can be made for Denmark, whose boxplots always lie above those of the remaining countries. The Belgium and Germany boxplots overlap across most of the time interval, indicating that different preferences on the Social Inclusion function might lead one to identify either Belgium or Germany as the better-performing country.
Partial effects of changes in specific indicators on social inclusion, by country
In our empirical framework, Social Inclusion is a composite index of four specific indicators, described in Section 2 . From a policy perspective, it would be valuable to identify which indicators would, if improved, lead to the largest gains in Social Inclusion. We therefore analyse the contribution that each indicator makes to the Social Inclusion index, in each country, by estimating how the index reacts to unitary improvements in the normalised indicators of income, education, labour market, and health.
From an analytical perspective, suppose that the ranking of the criteria performances is unique. The increment in the Choquet integral with respect to a unit increase in criterion i can then be decomposed into two addenda: the first represents the direct effect of the increment on the Choquet integral; the second represents its indirect effect, operating through the interactions between criterion i and the other criteria. If all criteria are independent from each other, that is, if the capacity is additive, only a direct effect will exist. In this paper, we prudently restrict ourselves to computing only the direct effect of indicators on Social Inclusion. We believe that, in order to compute the indirect effects, a conceptual model of the causal links between dimensions should be established; such a model has not been detailed so far, to the best of our knowledge, and falls beyond the scope of this work.
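One way to formalise this decomposition is the Möbius representation of the Choquet integral, C(x) = Σ_T m(T)·min_{j∈T} x_j, in which the singleton masses m({i}) drive the direct effect of an improvement in criterion i, while the higher-order masses drive the indirect effects through interactions. The sketch below uses an illustrative normalised Möbius capacity and performance vector, not the elicited or observed ones:

```python
# Sketch of the Choquet integral in Moebius form and of the *direct* effect
# of a unit improvement in one criterion. All numbers are illustrative.
criteria = ("education", "poverty", "unemployment", "life_expectancy")

# Moebius masses: singletons are "direct" masses; pairs encode interactions.
# Masses sum to 1, so the capacity is normalised.
m = {("education",): 0.25, ("poverty",): 0.22, ("unemployment",): 0.20,
     ("life_expectancy",): 0.18,
     ("education", "unemployment"): 0.10,
     ("education", "life_expectancy"): 0.05}

def choquet(x):
    """C(x) = sum_T m(T) * min_{j in T} x_j."""
    return sum(w * min(x[c] for c in T) for T, w in m.items())

# Hypothetical normalised performances of one region.
x = {"education": 0.6, "poverty": 0.4, "unemployment": 0.5,
     "life_expectancy": 0.7}

# Direct effect of a unit increase in criterion i: the singleton mass m({i}).
direct_effect = {c: m.get((c,), 0.0) for c in criteria}

print(round(choquet(x), 3))
print(direct_effect["education"])  # 0.25
```

Under an additive capacity all pair masses are zero, so the total effect of an improvement collapses to the direct effect, as stated in the text.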
We compute the direct effect of a unitary increase in each standardised indicator on the Social Inclusion index, from the baseline performance measured in 2017 (see online supplementary material, Appendix 1.6, Table 8 ). We employ the fusion of experts’ preferences to characterise the aggregation function, as in the main analysis.
Our results are shown graphically in Figure 9 , which reports, for each country, the coefficient corresponding to the increase in Social Inclusion following a unitary improvement in each specific indicator; the figure also includes a table where the coefficients are explicitly outlined. Because of the non-additive nature of the experts’ preferences, and because countries’ baseline performances are rather different, the direct effect of each indicator on Social Inclusion differs across countries. We believe this is a small but informative additional contribution of our analysis, as it shows that policy interventions need to consider which areas of Social Inclusion could, based on experts’ evaluations, lead to the largest gains in overall societal welfare (measured, in this case, with the Social Inclusion index). Moreover, our results show that a top-down approach which focuses on improving the same dimension in all areas might have large differential effects, depending on the starting levels of performance in that area. In general, our results show that each country would benefit most from an improvement in the indicators where its current performance is worst. This explains why, for example, improvements in longevity (unemployment) lead to much smaller (larger) gains in Italy and Spain than in Denmark and Germany: longevity (unemployment) levels are much higher (lower) in Italy and Spain than they are in Denmark and Germany to begin with.
We believe this analysis should be expanded in at least two ways. First, it would be important to estimate the indirect effects that any specific improvement in a given indicator would have on the overall index. Second, future work should attempt to evaluate the monetary effectiveness of improving Social Inclusion and compare it against the costs of implementing the policies needed to produce such an improvement.
Comparability with other measures of Social Inclusion
In this section, we discuss our findings in light of previous studies which analysed Social Inclusion in Europe (and beyond) and emphasise the additional contribution of our methodology. We will limit the discussion to the countries which are included in our study.
Our results are in line with those emerging from two broad measures of social welfare, the EU’s At-Risk-Of-Poverty-and-social-Exclusion (AROPE) index and the UN’s Human Development Index (HDI, see Section 2 for a definition). We graphically summarise the results from AROPE and HDI in the online supplementary material, Appendix 1.8 . In the HDI data, the Mediterranean countries have very similar scores, lower than the Continental countries by around 0.1 points (10% of the total range of the HDI) in the 1990s, while this gap has narrowed to 0.05 points in more recent years. The HDI profiles of Belgium, Denmark and Germany are very close throughout the years. The HDI has been steadily increasing from the 1990s to 2010, and then became flatter in the last decade, somewhat signalling an effect of the Great Recession. As for the AROPE index, which is available since 2015, all countries see a constant reduction in the percentage of population at risk. Mediterranean countries always show a higher prevalence of social exclusion (between 25% and 30%), although Italy’s progress seems more virtuous than Spain’s; Continental countries range between 17% and 22%.
Conversely, our results contrast with the picture emerging from other recent studies employing data-driven techniques to generate measures of Social Inclusion using very similar (or exactly the same) indicators as ours. Rogge & Konttinen (2018) develop an index of Social Inclusion through the Benefit-of-the-Doubt method. In their study, Italy ranks as the second-highest performer out of 29 countries in 2015, ahead of Denmark (7th), Germany (19th), Belgium (23rd) and Spain (29th). Lefebvre et al., (2010) employ a similar method to the previous study and produce a ranking of Social Inclusion where Spain is 1st alongside Denmark, while Germany is 12th, Belgium 13th and Italy 14th. Both studies suggest that Mediterranean countries can be regarded as highly efficient, relative to the observed best practice. A similar finding emerges from the analysis of Carrino (2016) , which shows that Italy’s performance in Social Inclusion is as good as that of the other Continental countries (while Spain performs notably worse), and improving, until the beginning of the 2010s.
The inconsistencies between our general findings and those of the aforementioned works lie in the weighting systems and normalisation procedures adopted. For example, in the study by Rogge & Konttinen (2018) , Italy’s score is computed by assigning 85% of the weight to the education indicator (Italy’s relative best among those considered in that study, which excludes the longevity indicator), while Germany’s and Denmark’s scores assign 66% of the weight to the poverty indicator, 23% to education, and 5% to the remaining indicators. In the study by Lefebvre et al., (2010) , the weight assigned to longevity is, on average, 17%, with 46% assigned to education and 24% to unemployment. In the data-driven study by Carrino (2016) , longevity carries 55% of the weight, labour market 25%, and education and poverty 10% each. Moreover, in those studies the data are normalised through data-driven methods (while in our paper the normalisation is informed by expert-elicited thresholds), which changes the interpretation of the units of measurement, as discussed in Section 3.3.3 .
Taken at face value, this comparison suggests a stark contrast with the weights derived from the preferences of our policy experts (Section 4.1 ). However, we argue that attempting to compare data-driven and normative indices requires recalling that the underlying methodological rationales are very different, which is reflected in an inherently different interpretation of the results. To show this, we have re-estimated our Social Inclusion index by adopting a data-driven approach as in Carrino (2016) , using Principal Component Analysis (PCA) as the aggregation function. We hereby summarise our findings, while a more detailed description of methods and results is available in online supplementary material, Appendix 1.8 . We have applied the PCA to a set of indicators normalised through a data-driven min–max function (as in, e.g. Döpke et al., (2017) and Peiró-Palomino (2018) ). We refer to this model as purely data-driven (model D). In our PCA analysis, 41% of the weight is assigned to longevity, while around 20% of the weight goes to each of the remaining dimensions. Unsurprisingly, results from model D (shown in the left panel of Figure 10 ) are consistent with the results of other studies adopting data-driven techniques ( Carrino 2016 ; Lefebvre et al., 2010 ; Rogge & Konttinen 2018 ): they show that countries have similar, and generally improving, levels of Social Inclusion over time, and that Mediterranean countries are among the top performers throughout. This result is very different from that emerging from our main, normative model (model N, in what follows), which is also reported for convenience in Figure 10 .
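For illustration, the data-driven pipeline of model D (min–max normalisation followed by first-principal-component weights) can be sketched in standard-library Python on fictional regional data; the PCA step is implemented via power iteration on the covariance matrix:

```python
# Min-max normalisation followed by first-principal-component weighting,
# a stdlib-only sketch of a data-driven "model D" pipeline. Data are fictional.
rows = [  # regions x (education, poverty, unemployment, longevity) indicators
    [72, 65, 60, 80],
    [55, 50, 45, 70],
    [40, 38, 35, 62],
    [85, 78, 70, 84],
    [30, 28, 25, 58],
    [60, 55, 52, 74],
]

def minmax(col):
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

cols = [minmax([r[j] for r in rows]) for j in range(4)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

C = [[cov(cols[i], cols[j]) for j in range(4)] for i in range(4)]

# Power iteration for the leading eigenvector (first principal component).
v = [1.0, 1.0, 1.0, 1.0]
for _ in range(200):
    w = [sum(C[i][j] * v[j] for j in range(4)) for i in range(4)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# Loadings rescaled to sum to one, interpreted as dimension weights.
weights = [abs(x) / sum(abs(y) for y in v) for x in v]
print([round(w, 3) for w in weights])
```

The resulting weights reflect only the correlation structure of the data, which is precisely the property criticised in the text: they carry no normative statement about the importance of each dimension.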
We argue that the policy conclusions that could be drawn from model D are different from those that could emerge from the results of model N. Model N relies on assumptions that are explicitly linked to value judgements and should be considered as informative complementary evidence to the findings of model D, which is instead grounded on statistical assumptions. As Döpke et al. (2017) put it, in the PCA method the dimension weights are determined based on proportions of explained variance, yet ‘there is no reason to suppose that a statistical property, such as the correlation between dimensions, captures meaningful trade-offs between these dimensions with respect to well-being’. Stated otherwise, the weights underlying the PCA analysis are ‘not a measure of the theoretical importance of the associated indicator’ ( Giovannini et al., 2008 ). A similar reasoning applies to the data-driven methods employed in the aforementioned papers.
To conclude, comparing data-driven and normative indices requires, in the words of Decancq & Lugo (2013) , distinguishing between descriptive statements (about what is) and normative statements (about what ought to be). They refer to the works of Popper (1948) and Hume (1739) , highlighting that we should not automatically ‘derive a statement about values from a statement about facts’. Data-driven methods lead to statements about facts, while normative methods lead to statements about values. More generally, our view is that normative methods should complement (not substitute for) the evidence already compiled from data-driven methods. In particular, normative methods complement data-driven methods in that they provide indices where the weights have an explicit economic interpretation, and the outcome carries a statement of values, provided that the sample of experts is clearly characterised and informative (see Section 3.3.1 ).
In some cases the two approaches might coincide; in the specific case of Social Inclusion, however, they lead to markedly different results. The data-driven ‘positive’ interpretation of the observed data suggests that, on average, countries’ performances in the four sub-dimensions of Social Inclusion balance out: Italy and Spain outperform the other countries in some dimensions, yet are outperformed in others, so that the overall level of performance is roughly similar across countries. The survey-driven ‘normative’ interpretation complements this view by highlighting how, under a specific set of social preferences elicited from policy experts, Italy and Spain exhibit weaknesses in several dimensions of Social Inclusion, which lead to a substantially worse overall condition with respect to other countries; such weaknesses are not fully compensated by the stronger performance that Italy and Spain exhibit, relative to other countries, in other dimensions.
The contrast between the two results is entirely explained by their different conceptual approaches: as we will argue in the Conclusions to this paper, no method is objective per se. Yet, we believe that presenting both sets of results can provide the policymaker with a richer and more informed ground upon which policy choices can be made. | Discussion
While Social Inclusion has become central to the policy debate within the European Union, there is limited comparative evidence on how countries have performed in promoting Social Inclusion in the last decades. While some recent studies have provided multidimensional indices of Social Inclusion for Europe, their results do not aim to provide a normative evaluation of the observed levels of Inclusion across countries ( Decancq & Lugo 2013 ).
This paper introduced a rigorous approach for the construction of a normative-based index of Social Inclusion, using European regional longitudinal data from Eurostat.
We believe our work leads to several important contributions of large policy relevance. First, we use a non-additive method to aggregate sub-dimensions, the Choquet Integral, which allows us to model separately the relationship between each pair of sub-dimensions, i.e. their degree of complementarity or substitutability. This improves on methods which consider all outcomes as independently related to Social Inclusion (e.g. the Social Protection Committee report, which defines Social Inclusion based on the share of population which exhibits either a high poverty rate, or a high material deprivation rate, or a high prevalence of (quasi-)jobless households). Our method also improves on non-linear approaches, such as geometric aggregation models, which assume that the elasticity of substitution between outcomes is the same for all pairs of outcomes. For example, our model allows combined shortcomings in education and labour market outcomes to affect Social Inclusion differently than combined shortcomings in education and health. The ability to evaluate synergies and redundancies between sub-dimensions of Social Inclusion can prove very valuable to policymakers, as it suggests that social policies should not be designed in ‘institutional silos’, but rather with a broad perspective that acknowledges that outcomes from different social areas (e.g. labour market, education, health) interact in affecting population welfare. Moreover, our study shows that improvements in a given indicator might lead to very different effects on the overall level of Social Inclusion, depending on the initial performances exhibited by each country: improving the dimensions that exhibit poor levels of performance would lead to higher gains than improving a dimension that is already at a high level of performance.
Second, we elicited the parameters of the aggregation function from a population of expert decision-makers in social policies in Italy. Such experts’ preferences represent, albeit subjectively, a policy perspective on actual needs, and may therefore translate into very different weights from those derived with data-driven methods. Moreover, the data have been standardised based on judgement thresholds set a priori and independently by academic experts in Economics, rather than through common normalisation techniques based on observed data. Hence, our index has a clear normative interpretation and represents a direct evaluation of the overall performance of a region or country. Our methodological approach is, moreover, potentially applicable to most conceptual frameworks related to multifaceted indices, e.g. indices of transparency ( Galli et al., 2017 ), active ageing ( Floridi & Lauderdale 2022 ), or well-being ( Peiró-Palomino 2018 ).
We also note that the commission which introduced the conceptual framework of Social Inclusion (see Atkinson et al., (2002) ) stressed that Social Inclusion should be evaluated by looking at performance achieved, rather than at nations’ or regions’ ranking, while Atkinson et al., (2004) highlighted that the raw indicators on which the Social Inclusion index is based should carry a strong normative interpretation. We believe that, for all the aforementioned reasons, our Social Inclusion index offers a valuable and policy-relevant contribution. Our normative method offers a strong policy perspective, as it is grounded in the preferences expressed by experts in both its aggregation and its normalisation stage.
Our results highlight an important divide in Social Inclusion between Mediterranean countries such as Italy and Spain, and Northern/Central countries such as Denmark, Germany and Belgium. On a scale from 0 (very low and socially undesirable Social Inclusion level) to 100 (very high and socially desirable Social Inclusion level), the former countries’ Inclusion index ranges between 15 and 50 between 2004 and 2017, while among the latter group the index ranges between 50 and 80. The spread between the two groups has significantly increased after the recent financial crisis, with Italy on a stagnating path and Spain’s index dropping from a score of around 40 (pre-crisis) to 15 (post-crisis). Moreover, we showed that regional inequalities in Social Inclusion vary widely across countries. On the one hand, Italy’s regional context has grown increasingly unequal over time, with some regions faring extremely well and others extremely badly in the Social Inclusion index. Conversely, Germany and Denmark show rather small regional variation, and a narrowing trend over time. Our results therefore suggest that the recent decades have not seen a reduction in disparities across and within countries, and highlight the existence of areas facing highly undesirable levels of Inclusion. This underscores the need for Social Inclusion policies that could help reduce the gaps within countries, and increase Social Inclusion in Mediterranean countries to close the gap with the Continental countries.
These results can be read in light of the recent reports by the European Social Protection Committee (2018) , which focus on the monetary and labour-market dimensions of Social Inclusion. The report has documented a persistent divergence across Member States with respect to material deprivation and risk of poverty. Overall, our results provide a somewhat more worrisome picture, as our index also includes health and education outcomes, and is hence able to better represent the complexity of dimensions which underlie Social Inclusion. Our findings are in line with evidence from alternative conceptual frameworks of Social Inclusion, such as the Human Development Index and the European Union AROPE index ( UNECE 2022 ). Conversely, our results challenge recent evidence based on the same conceptual framework we employ but adopting data-driven approaches, e.g. Rogge & Konttinen (2018) and Carrino (2016) , who highlighted high and improving trends in Social Inclusion for Mediterranean countries.
However, our approach is not free from limitations. First, given that we focused on administrative regions, our country selection is limited by data availability, as we could not include countries which only had data for statistical regions, or lacked several years of data for administrative regions. However, further analyses could enlarge the sample size, for example, by focusing on countries rather than on regions.
Moreover, our method, as arguably any method for multidimensional welfare evaluation, is not objective per se. For example, the index parameters, as well as the results, could change with the choice of the expert panel. However, since methodological ‘objectiveness’ in building a multidimensional indicator of a latent abstract construct is at odds with the inherent subjectivity of such a construct, several calls have been made for strategies that enhance the transparency ( Sen & Anand 1997 ) and expressiveness of the methods ( Decancq & Lugo 2013 ; Hüllermeier & Schmitt 2014 ). Moreover, our approach could be applied at a larger scale, e.g. by engaging with a European or global panel of experts, in order to build an overarching normative policy instrument for monitoring Social Inclusion.
Finally, our operationalisation of Social Inclusion, although in line with several previous studies on the topic, could be challenged. Alternative empirical models could be proposed to measure the latent phenomenon, with a larger number of sub-dimensions ( UNECE 2022 ). While an increase in the number of attributes could prove cumbersome for the experts involved in the scenario evaluation process, recent studies have shown that a complex conceptual model can be operationalised in a tree-shaped (nested) structure, where each node of the tree is constituted by a limited number of attributes ( Bertin et al., 2018 ).

Conflicts of interest: the authors declare no conflicts of interest.
Abstract
This paper introduces a normative, expert-informed, time-dependent index of Social Inclusion for European administrative regions in five countries, using longitudinal data from Eurostat. Our contribution is twofold: first, our indicator is based on a non-additive aggregation operator (the Choquet integral), which allows us to model a wide range of preference structures and to overcome the limitations embedded in other approaches. Second, we elicit the parameters of the aggregation operator from an expert panel of Italian policymakers in Social Policy and of Economics scholars. Our results highlight that Mediterranean countries exhibit lower Inclusion levels than Northern/Central countries, and that this disparity has grown in the last decade. Our results complement and partially challenge existing evidence from data-driven aggregation methods.

Social inclusion: conceptual framework and data
Promoting Social Inclusion is a priority target in the European Commission’s strategic vision, as exemplified by large policy initiatives such as the Lisbon Strategy, Europe 2020 and Europe 2030. For example, in 2010 the EU countries committed to reducing by at least 20 million the population at risk of social exclusion, defined as the population either below the relative-poverty threshold, or facing severe material deprivation, or living in (quasi-)jobless (i.e. very low work intensity) households ( Social Protection Committee 2018 ). Social Inclusion was broadly introduced in the economics literature in the 1970s to capture situations where individuals are excluded from the ‘mainstream of society’ even if not income-poor ( Sen & Anand 1997 ). The European institutions later defined Social Inclusion as an enlarged measure of monetary poverty, as it focuses on the ‘multidimensional nature of the mechanisms whereby individuals and groups are excluded from taking part in the social exchanges, from the component practices and rights of social integration and of identity’ ( European Communities Commission 1992 ). Social Inclusion therefore has a very strong policy connotation: it is intended as a multidimensional phenomenon stemming from inadequacies or weaknesses in public services and policies from various areas, which combine and cumulate to affect both people and regions via cumulative and interdependent processes ( Atkinson et al., 2002 ).
The EU’s Social inclusion conceptual framework
This study adopts the European Union’s definition and indicators of Social Inclusion as defined by Atkinson et al. (2002) , the so-called ‘Laeken indicators’. The European Union has long aimed at strengthening the role of social policy as a productive factor, and one of the means to achieve this aim has been establishing a process of information exchange allowing constant mutual monitoring of Social Inclusion between Member States, called the ‘open method of coordination’ ( Borrás & Jacobsson 2004 ). The monitoring process required reaching an agreement on the conceptual and operational definitions of Social Inclusion, which were both defined by the commission led by Tony Atkinson ( Atkinson et al., 2002 ) and presented at the Laeken European Council in 2001. These definitions have since been widely applied in economics and statistics works on Social Inclusion, and we refer to them as the ‘EU Social Inclusion’ concept. Building on the experience of Member States, the commission identified five basic sub-dimensions of Social Inclusion, hereafter also attributes :
Income: material deprivation;
Labour market: lack of productive role;
Education: lack of education;
Health: poor health;
Housing: poor housing.
These attributes represent the multidimensional phenomenon by which individuals and groups are excluded from taking part in the social exchanges—to quote the aforementioned original definition of Social Inclusion. The Atkinson commission then identified ten primary statistical indicators to measure the first four dimensions. The fifth dimension (housing) was not populated with any statistical indicator, hence we could not consider it in our study. 1 We refer to Atkinson et al. (2002 , 2004) , as well as to European Commission (2009 , 2010) , for further details on the rationale and limitations embedded in both the choice of the Social Inclusion attributes and the measurement of the ten primary indicators.
In this study, we propose a novel normative method to create a multidimensional index, employing the EU’s Social Inclusion concept as a policy-relevant case study. For completeness, we briefly recall that the literature in economics and policy has developed alternative conceptual frameworks for measuring poverty and, especially, well-being, the latter being characterised by a wider scope than Social Inclusion (see UNECE (2022) for an exhaustive review of approaches to measure Social Inclusion). A prominent example is the United Nations Development Programme’s (UNDP) Human Development Index (HDI), which includes three main dimensions (health, living standards and education), hence leaving out the labour market performances which are included in the EU’s Social Inclusion ( Ravallion 2012 ). A second example is the Multidimensional Poverty Index (MPI), which was introduced to expand the scope of the HDI ( Alkire et al., 2015 ). The MPI comprises three dimensions: health, education and living standards (the latter including cooking fuel, toilet facility, water access, electricity, flooring material, and assets). Another notable example is the well-being index built by the OECD ( OECD 2020 ), which is based on the conceptual framework developed by Stiglitz et al. (2010) . The OECD Well-being Framework includes 11 dimensions, which capture material conditions that shape people’s economic options (Income and Wealth; Housing; Work and Job Quality), other quality-of-life factors related to how well people feel (Health; Knowledge and Skills; Environmental Quality; Subjective Well-being; Safety), and to how connected and engaged people are (Work-Life Balance; Social Connections; Civic Engagement). Finally, the European Union has developed a narrower set of indicators to measure Social Exclusion as part of its Europe 2020 Strategy, referred to as the ‘at risk of poverty and social exclusion’ (AROPE) indicators ( UNECE 2022 ).
The AROPE approach defines an individual as at risk of poverty or social exclusion when at least one of the following conditions holds: (a) equivalent household income below 60% of the national median; (b) household reporting at least four of the following nine issues: (i) impossibility to bear unexpected expenses; (ii) cannot afford a week holiday; (iii) issues with the mortgage, rent, or bills; (iv) cannot afford a proper meal every two days; (v) not able to adequately heat the house; not able to afford (vi) a washing machine, (vii) a colour TV, (viii) a phone, or (ix) an automobile; (c) living in families whose members aged 18–59 work less than a fifth of their time. The methodological approach of our paper fully applies to the AROPE indicators, and we compare our findings with the AROPE index in the Results section. In this paper, we prefer to employ the Laeken indicators as a case study, as the European Union itself recognises that the Laeken indicators ‘encompass a wider range of issues than the AROPE indicator and move beyond a focus solely on economic and labour market aspects of social exclusion’ ( UNECE 2022 ).
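As a concrete illustration, the three AROPE conditions just listed can be encoded directly. The function names and toy inputs below are hypothetical; only the thresholds (60% of the median, four of nine deprivation items, one fifth of work time) follow the definition above:

```python
from statistics import median

def arope_flags(hh_income, national_incomes, n_deprivations, work_intensity):
    """Evaluate the three AROPE conditions for one household (a sketch)."""
    at_risk_of_poverty = hh_income < 0.6 * median(national_incomes)  # condition (a)
    severely_deprived = n_deprivations >= 4                          # condition (b): >= 4 of the 9 items
    quasi_jobless = work_intensity < 0.2                             # condition (c): under a fifth of work time
    return at_risk_of_poverty, severely_deprived, quasi_jobless

def is_arope(*args):
    # an individual is AROPE if at least one of the three conditions holds
    return any(arope_flags(*args))
```

For example, a household earning 15,000 in a country whose income distribution has median 30,000 is flagged by condition (a) alone, regardless of deprivation and work intensity.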
Empirical framework
Our empirical framework employs four statistical indicators among the ten identified by Atkinson et al. (2002) to measure the four attributes of the EU Social Inclusion framework, chosen following existing relevant studies in the field ( Lefebvre et al., 2010 ; Mazziotta & Pareto 2016 ; Rogge & Konttinen 2018 ):
poverty-rate (income)
long-term unemployment rate (labour market)
early school-leavers rate (education)
life expectancy at birth , in years (health)
Table 1 provides a brief definition of the four variables. A similar set of variables has already been used in the literature ( Lefebvre et al., 2010 ; Mazziotta & Pareto 2016 ; Rogge & Konttinen 2018 ). We argue that these indicators, although not free from limitations, are relevant and valid, as they cover the most relevant concerns of a modern welfare state and reflect aspects suitable to analyses that aim to move beyond GDP to better measure social welfare ( Lefebvre et al., 2010 ). In particular, let us briefly discuss each chosen attribute:
The poverty rate indicator (the percentage falling below income thresholds) is one of the most widely used indicators for the risk of poverty. Compared to an absolute measure of income, it minimises the risk of biases from measurement errors, while providing a context-specific and informative measure of financial shortcomings. Nevertheless, the concept of social exclusion is inherently characterised by the realisation that low income may not be, per se, a reliable indicator of social exclusion, whose assessment would also require evaluating the other resources and needs of individuals.
Among the latter factors, shortcomings in the labour market are considered crucial determinants of social security and welfare. In particular, the long-term unemployment rate (percentage unemployed for a year or more) is a key predictor of poverty, as it captures non-transitory periods without the monetary and non-monetary benefits of work, which can have long-lasting effects on individuals’ prospects.
A similar rationale justifies the choice of education (or lack thereof) as a criterion, and in particular the proportion of early school leavers (individuals aged 18–24 having achieved lower secondary education or less, and not currently attending education or training). Education not only enhances people's productivity at work, but also develops the capacity of individuals to lead a full life, transmitting societal norms and values. In this respect, the indicator measures low educational attainment, which has important influences on subsequent life-chances and on the risk of experiencing poverty and exclusion.
Health status indicators have long been accepted as major tools to measure social progress across time and countries, e.g. through the Human Development Index and the Human Poverty Index, among many others. Health outcomes can include indices of mortality, morbidity, and ability to function. The most widely used health outcome is longevity at birth (which allows reliable comparability across countries and years), as ‘one major indicator of human poverty is a short life’ ( Sen & Anand 1997 ). Life expectancy is therefore adopted to capture the health dimension of Social Inclusion. This indicator is not free from limitations, as illustrated by Atkinson et al. (2004) . For example, an alternative and perhaps more informative measure would require comparative data on life expectancy in good health (free from disability).
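As noted in the abstract, the four attributes discussed above are aggregated with a Choquet integral whose capacity is elicited from the expert panel. The sketch below is illustrative only: it computes a discrete Choquet integral of normalised attribute scores (0 = worst, 1 = best) under a hypothetical size-based capacity, not the capacity actually elicited in this study:

```python
def choquet(scores, capacity):
    """Discrete Choquet integral of normalised scores (dict attribute -> value
    in [0, 1]) with respect to a capacity given as a function on attribute sets."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev, remaining = 0.0, 0.0, set(scores)
    for name, value in items:
        total += (value - prev) * capacity(remaining)
        prev = value
        remaining.discard(name)
    return total

def size_capacity(subset, n=4, gamma=2.0):
    # hypothetical monotone capacity depending only on coalition size;
    # gamma > 1 penalises unbalanced profiles (complementary attributes)
    return (len(subset) / n) ** gamma

region = {"income": 0.7, "labour": 0.4, "education": 0.8, "health": 0.9}
index = choquet(region, size_capacity)  # 0.6
```

With this convex capacity the profile scores 0.6, below its arithmetic mean of 0.7: the weak labour-market score drags the index down, a penalty that an additive (weighted-average) aggregation would partly mask.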
We choose administrative regions as the main territorial unit of this analysis, with the aim of capturing higher variability than can be inferred from aggregate national data. Data availability is a serious constraint for analyses which focus on administrative regions, in a wide set of countries and for a long time-period ( Lefebvre et al., 2010 ; Chiappero-Martinetti & von Jacobi 2012 ). In particular, the Social Inclusion variables are available in the Eurostat Regional Database for 63 administrative regions in five countries (Belgium, Denmark, Germany, Italy, and Spain), for years 2004–2017 (data for Denmark start from year 2006). 2
As summarised in Table 2 , Italy and Spain exhibit the highest overall longevity levels, but they fare substantially worse in the economic and educational dimensions, besides showing larger regional heterogeneity. School dropout rates are relatively low in Denmark, Belgium, and Germany. However, the worst performing regions in Belgium and Germany (Bruxelles and Saarland, respectively) exhibit dropout rates larger than 20%. In Italy and in Spain, the range between best and worst regions amounts to 26 percentage points (Italy) and 45 percentage points (Spain). Similarly, although average long-term unemployment is similar across the five countries (with the lowest levels recorded in Denmark), regional heterogeneity is substantially higher in Italy and Spain. Average poverty rates are higher in Italy and especially in Spain, where again regional divides are substantial.
Figure 11 (in online supplementary material, Appendix 1.1 ) summarises the time trends for the four indicators, highlighting converging (and improving) trends for school dropout rates, parallel (improving) trends for longevity, and diverging trends for poverty- and unemployment-rates. Table 3 includes the correlation coefficients among the four indicators of Social Inclusion.
Supplementary Material

Acknowledgments
The authors thank Michele Bernasconi, Giovanni Bertin, Danilo Cavapozzi, Martina Celidoni, Koen Decancq, Marco Fattore, and Filomena Maggino for their valuable comments on previous versions of this paper. The authors are most grateful to the public servants who took part in the study. Ludovico Carrino is particularly thankful to Maurizio Zenezini for his precious insights at the very early stage of this project, and to Agar Brugiavini, who provided invaluable mentoring, as well as technical and logistic support, which made this paper possible. Finally, Ludovico Carrino is grateful to the whole Faculty of the Departments of Economics and Management at the University of Venezia Ca’ Foscari for the encouragement and support provided.
Funding
Ludovico Carrino is supported by the Gateway to Global Aging Data funded by the National Institute on Aging (R01 AG030153), and by the Economic and Social Research Council, through the grant ES/S01523X/1 (IN-CARE project). This work also represents independent research partly supported by the ESRC Centre for Society and Mental Health at King’s College London (ES/S012567/1).
Data availability
The data in this study are openly available from the EUROSTAT website, at https://ec.europa.eu/eurostat/web/regions/data/database .
Supplementary material
Supplementary material is available online at Journal of the Royal Statistical Society: Series A (J R Stat Soc Ser A Stat Soc. 2023; 187(1): 229–257).
Pseudo-CT Results:
The total RMSE for the pseudo-CT compared to the gold-standard CT across all volumes was 98 HU (−13 ± 97 HU) for ZeDD-CT and 95 HU (−6.5 ± 94 HU) for BpCT. The BpCT is the same pseudo-CT image used in UpCT-MLAA.

DISCUSSION
This article presents the use of a Bayesian deep convolutional neural network to enhance MLAA by providing an accurate pseudo-CT prior alongside predictive uncertainty estimates that automatically modulate the strength of the priors (UpCT-MLAA). The method was evaluated in patients with pelvic lesions, both with and without metal implants. The performance for metal implant recovery and uptake estimation in pelvic lesions in patients with metal implants was characterized. This is the first work to demonstrate an MLAA algorithm for PET/MRI that can recover metal implants while also accurately depicting detailed anatomic structures in the pelvis. It is also the first work to synergistically combine supervised Bayesian deep learning and MLAA in a coherent framework for simultaneous PET/MRI reconstruction in the pelvis. The UpCT-MLAA method demonstrated quantitative uptake estimation of pelvic lesions similar to a state-of-the-art attenuation correction method (ZeDD-CT) while additionally providing the capability to perform reasonable PET reconstruction in the presence of metal implants and removing the need for a specialized MR pulse sequence.
One of the major advantages of MLAA is that it uses the PET emission data to estimate the attenuation coefficients alongside the emission activity. This gives MLAA the capability to truly capture the underlying imaging conditions that the PET photons undergo. This is especially important in simultaneous PET/MRI, where true ground-truth attenuation maps cannot be derived. Currently, the most successful methods for obtaining attenuation maps are deep learning-based [ 20 ]–[ 28 ]. However, these methods are inherently supervised model-based techniques and have limited capacity to capture imaging conditions that were not present in the training set or that cannot be reliably modeled, such as the movement and mismatch of bowel air and the presence of metal artifacts. Since MLAA derives the attenuation maps from the PET emission data, it can capture actual imaging conditions that supervised model-based techniques cannot. Furthermore, this eliminates the need for a specialized MR pulse sequence (such as ZTE for bone), since the bone attenuation coefficients are estimated by MLAA instead. This allows for more accurate and precise uptake quantification in simultaneous PET/MRI.
To the best of our knowledge, only a few other methods combine MLAA with deep learning [ 39 ]–[ 42 ]. These methods apply deep learning to denoise an MLAA reconstruction by training a deep convolutional neural network to produce an equivalent CTAC from MLAA estimates of activity and attenuation maps. This approach inherently requires ground-truth CTAC maps to train the network and is thus affected by the same limitations as other supervised deep learning and model-based methods. Unlike these methods, our method (UpCT-MLAA) preserves the underlying MLAA reconstruction while still providing the same reduction of crosstalk artifacts and noise.
Our approach differs from all other approaches in that we leverage supervised Bayesian deep learning uncertainty estimation to detect rare and previously unseen structures in pseudo-CT estimation. Only a few previous works estimate uncertainty in pseudo-CT generation [ 57 ], [ 58 ]. Klages et al. [ 57 ] utilized a standard deep learning approach and extracted patch uncertainty but did not assess their method on cases with artifacts or implants. Hemsley et al. [ 58 ] utilized a Bayesian deep learning approach to estimate total predictive uncertainty and similarly demonstrated high uncertainty on metal artifacts. Both approaches were intended for radiotherapy planning, and our work is the first to apply uncertainty estimation to PET/MRI attenuation correction. We demonstrated how likely μ-map errors can be detected and resolved with the use of PET emission data through MLAA.
High uncertainty was present in many different regions. Metal artifact regions had high uncertainty because they were explicitly excluded in the training process—i.e., an out-of-distribution structure. Air pockets had high uncertainty likely because of the inconsistent correspondence of air between MRI and CT—i.e., intrinsic dataset errors. Other image artifacts (such as motion due to breathing) have high uncertainty likely due to the rare occurrence of these features in the training dataset and its inconsistency with the corresponding CT images. Bone had high uncertainty since there is practically no bone signal in the Dixon MRI. Thus, the CNN likely learned to derive the bone value based on the surrounding structure and the variance image shows the intrinsic uncertainty and limitations of estimating bone HU values from Dixon MRI. Again, these regions were highlighted by being assigned high uncertainty without the network being explicitly trained to identify these regions.
On evaluation with patients without implants, we demonstrated that BpCT was a sufficient surrogate of ZeDD-CT for attenuation correction across all lesion types: BpCT provided comparable SUV estimation on bone lesions and improved SUV estimation on soft-tissue lesions. However, the BpCT images lacked accurate estimation of bone HU values, which resulted in an average underestimation of bone lesion SUV values (−0.9%). The average underestimation was reduced with UpCT-MLAA (−0.3%). Although the mean underestimation improved, the RMSE of UpCT-MLAA was higher than that of BpCT-AC (3.6% versus 3.2%, respectively) due to the increase in standard deviation (3.6% versus 3.1%, respectively). This trend was more apparent for soft-tissue lesions: the RMSE, mean error, and standard deviation were all worse for UpCT-MLAA than for BpCT. Since the PET/MRI and CT were acquired in separate sessions, possibly months apart, there may be significant changes in tissue distribution. This could explain the increase in errors of BpCT-AC under UpCT-MLAA.
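The quoted triples of mean error, standard deviation, and RMSE are tied together by the decomposition RMSE² = bias² + SD² (exact with the population SD; small discrepancies can arise from rounding or the sample-SD convention). A quick check against the bone-lesion figures:

```python
import math

def rmse_from_bias_sd(bias, sd):
    # population identity: RMSE^2 = bias^2 + SD^2
    return math.hypot(bias, sd)

# bone-lesion percent errors quoted above (mean, SD) -> implied RMSE
print(round(rmse_from_bias_sd(-0.3, 3.6), 1))  # 3.6  (UpCT-MLAA)
print(round(rmse_from_bias_sd(-0.9, 3.1), 1))  # 3.2  (BpCT-AC)
```

Both implied RMSE values match the reported ones, confirming that the higher UpCT-MLAA RMSE is driven almost entirely by its larger standard deviation rather than by bias.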
On the patients with metal implants, UpCT-MLAA was the most comparable to CTAC across all lesion types. Notably, there was an opposing trend in the PET results for lesions in-plane and out-of-plane of the metal implant between BpCT-AC and the MLAA methods. These trends were likely due to the sources of data for reconstruction: BpCT-AC has attenuation coefficients estimated only from the MRI, whereas naïve MLAA has attenuation coefficients estimated only from the PET emission data. The input MRI was affected by large metal artifacts that make the affected regions appear to be large pockets of air. Thus, in BpCT-AC, the attenuation coefficients of air were assigned to the metal artifact region. For lesions in-plane of the implant, this led to a large bias due to the bulk error in attenuation coefficients and a large variance due to the large range of attenuation coefficients with BpCT-AC, while this is resolved with MLAA. For lesions out-of-plane of the implant, the opposite trend arises: for MLAA, the variance is large due to the noise in the attenuation coefficient estimates, whereas this is resolved in BpCT-AC since the attenuation coefficients are learned for normal anatomical structures that are unaffected by metal artifacts. The combination of BpCT with MLAA through UpCT-MLAA resolved these disparities.
A major challenge in evaluating PET reconstructions in the presence of metal implants is that typical CT protocols for CTAC produce metal implant artifacts that may cause overestimation of uptake and, thus, do not serve as a true reference. Since our method relies on time-of-flight MLAA, we believe that it would produce a more accurate AC map and, therefore, a more accurate SUV map. This is demonstrated by the lower estimates of UpCT-MLAA compared to CTAC PET. However, for a more precise evaluation, a potential approach to evaluate UpCT-MLAA is to use metal artifact reduction techniques on the CT acquisition [ 43 ] or to acquire transmission PET images [ 59 ].
Accurate co-registration of CT and MRI with metal implant artifacts was a limitation since the artifacts present themselves differently. Furthermore, the CT and MRI images were acquired in separate sessions. These can be mitigated by acquiring images sequentially in a trimodality system [ 60 ].
Another limitation of this study was the small study population. Having a larger population would allow evaluation with a larger variety of implant configurations and radiotracers and validation of the robustness of the attenuation correction strategy.
Finally, the performance of the algorithm can be further improved. In this study, we only sought to demonstrate the utility of uncertainty estimation with a Bayesian deep learning regime for attenuation correction in the presence of metal implants: that the structure of the anatomy is preserved and implants can be recovered while still providing similar PET uptake estimation performance in pelvic lesions. Our proposed UpCT-MLAA was based on MLAA regularized with MR-based priors [ 27 ], which can be viewed as unimodal Gaussian priors. We speculate that this could be further improved by using Gaussian mixture priors for MLAA as in [ 36 ]. The major task in combining these methods would be to learn the Gaussian mixture model parameters from patients with implants. With additional tuning of the algorithm and optimization of the BCNN, UpCT-MLAA can potentially produce the most accurate and precise attenuation coefficients in all tissues and in any imaging conditions.

Abstract

A major remaining challenge for magnetic resonance-based attenuation correction methods (MRAC) is their susceptibility to sources of magnetic resonance imaging (MRI) artifacts (e.g., implants and motion) and uncertainties due to the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air/bone). We propose using a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with the maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction that uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants.
In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared-error (RMSE) than the zero echo-time and Dixon deep pseudo-CT (ZeDD-CT) when compared to CTAC. In patients with metal implants, MLAA recovered the metal implant; however, anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the pseudo-CT from Dixon MRI were accurate in normal anatomy; however, the metal implant region was estimated to have attenuation coefficients of air. UpCT-MLAA estimated attenuation coefficients of metal implants alongside accurate anatomic depiction outside of implant regions.

Introduction
The quantitative accuracy of simultaneous positron emission tomography and magnetic resonance imaging (PET/MRI) depends on accurate attenuation correction. Simultaneous imaging with positron emission tomography and computed tomography (PET/CT) is the current clinical gold standard for PET attenuation correction since the CT images can be used for attenuation correction of 511-keV photons with piecewise-linear models [ 1 ]. Magnetic resonance imaging (MRI) measures spin density rather than electron density and, thus, cannot directly be used for PET attenuation correction.
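As a rough sketch of such a piecewise-linear model: CT numbers below a breakpoint can be treated as air/water mixtures, and values above it scaled with a shallower bone-specific slope. The coefficients below are illustrative only, not a scanner-calibrated (kVp-dependent) mapping:

```python
def hu_to_mu_511(hu, mu_water=0.096, break_hu=0.0, bone_slope=5.0e-5):
    """Piecewise-linear conversion from CT number (HU) to the linear
    attenuation coefficient (cm^-1) at 511 keV (illustrative coefficients)."""
    if hu <= break_hu:
        # air (-1000 HU) maps to 0, water (0 HU) maps to mu_water
        return mu_water * (hu + 1000.0) / 1000.0
    # denser-than-water voxels (bone) follow a shallower slope at 511 keV
    return mu_water + bone_slope * (hu - break_hu)
```

For example, air maps to 0 cm⁻¹, water to 0.096 cm⁻¹, and a 1000 HU bone-like voxel to a value between water and dense cortical bone.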
A comprehensive review of attenuation correction methods for PET/MRI can be found at [ 2 ]. Briefly, current methods for attenuation correction in PET/MRI can be grouped into the following categories: atlas based, segmentation based, and machine learning based. Atlas-based methods utilize a CT atlas that is generated and registered to the acquired MRI [ 3 ]–[ 6 ]. Segmentation-based methods use special sequences such as ultrashort echo-time (UTE) [ 7 ]–[ 11 ] or zero echo-time (ZTE) [ 12 ]–[ 16 ] to estimate bone density and Dixon sequences [ 17 ]–[ 19 ] to estimate soft-tissue densities. Machine learning-based methods, including deep learning methods, use sophisticated machine learning models to learn mappings from MRI to pseudo-CT images [ 20 ]–[ 26 ] or PET transmission images [ 27 ]. There have also been methods that estimate attenuation coefficient maps from the PET emission data [ 28 ], [ 29 ] or directly correct PET emission data [ 30 ]–[ 32 ] using deep learning.
For PET alone, an alternative method for attenuation correction is “joint estimation,” also known as maximum-likelihood estimation of activity and attenuation (MLAA) [ 33 ], [ 34 ]. Rather than relying on an attenuation map that was measured or estimated with another scan or modality, the PET activity image (λ-map) and PET attenuation coefficient map (μ-map) are estimated jointly from the PET emission data only. However, MLAA suffers from numerous artifacts and high noise [ 35 ].
In PET/MRI, recent methods developed to overcome the limitations of MLAA include using MR-based priors [ 36 ], [ 37 ], constraining the region of joint estimation [ 38 ], or using deep learning to denoise the resulting λ-map and/or μ-map from MLAA [ 39 ]–[ 42 ]. Mehranian and Zaidi’s [ 36 ] approach of using priors improved MLAA results; however, this was not demonstrated on metal implants. Ahn et al. ’s and Fuin et al. ’s methods [ 37 ], [ 38 ] that also use priors were able to recover metal implants in the PET image reconstruction, but the μ-maps were missing bones and other anatomical features. Furthermore, their methods require a manual or semiautomated segmentation step to delineate the regions in which to apply the correct priors (such as the metal implant region). The approaches by Hwang et al. [ 39 ]–[ 41 ] and Choi et al. [ 42 ] that utilize supervised deep learning resulted in anatomically correct and accurate μ-maps; however, the method was not demonstrated in the presence of metal implants.
Utilizing supervised deep learning is considered a very promising method for accurate and precise PET/MRI attenuation correction. However, the main limitation of a supervised deep learning method is the finite data set that needs to have a diverse set of well-matched inputs and outputs.
In PET/MRI, the presence of metal implants complicates training because there are resulting metal artifacts in both CT and MRI. Furthermore, the artifacts appear differently: a metal implant produces a star-like streaking pattern with high Hounsfield unit values in the CT image [ 43 ] and a signal void in the MRI image [ 37 ]. This makes registration between MRI and CT images difficult, and the artifacts lead to intrinsic errors in the training dataset.
In addition, there will arguably always be edge cases and rare features that cannot be captured with enough representation in a training data set. Images of humans can have rare features not easily obtained (e.g., missing organs due to surgery, a new or uncommon implant). Under these conditions, a standard supervised deep learning approach may produce incorrect predictions and the user (or any downstream algorithm) will be unaware of the errors.
A recent study by Ladefoged et al. [ 44 ] demonstrated the importance of a high-quality data set in deep learning-based brain PET/MRI attenuation correction. A large, diverse set of at least 50 training examples were required to achieve robustness and they highlighted that the remaining errors and limitations in deep learning-based MR attenuation correction were due to “abnormal bone structures, surgical deformation, and metal implants.”
In this work, we propose the use of supervised Bayesian deep learning to estimate predictive uncertainty to detect rare or previously unseen image structures and estimate intrinsic errors that traditional supervised deep learning approaches cannot.
Bayesian deep learning provides tools to address the limitations of a finite training dataset: the estimation of epistemic and predictive uncertainty [ 45 ]. A general introduction to uncertainties in machine learning can be found at [ 46 ].
Epistemic uncertainty is the uncertainty on learned model parameters that arises due to incomplete knowledge or, in the case of supervised machine learning, the lack of training data. Epistemic uncertainty is manifested as a diverse set of different model parameters that fit the training data.
The epistemic uncertainty of the model can then be used to produce predictive uncertainty that captures if there are any features or structures that deviate from the training dataset on a test image. This allows for the detection of rare or previously unseen image structures without explicitly training to identify these structures.
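The idea in this paragraph can be illustrated with a minimal NumPy sketch: a bootstrap ensemble of linear fits stands in for Bayesian posterior samples (an assumption made purely for illustration; it is not the paper's method), and the spread of its predictions grows for inputs far from the training distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: the model only ever sees x in [0, 1].
x_train = rng.uniform(0.0, 1.0, 200)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, 200)

def ensemble_predict(x, n_members=50):
    """Bootstrap ensemble as a cheap stand-in for posterior model samples.

    The spread across members approximates epistemic uncertainty; its
    effect on the outputs is the predictive uncertainty.
    """
    preds = []
    for _ in range(n_members):
        idx = rng.integers(0, len(x_train), len(x_train))
        coef = np.polyfit(x_train[idx], y_train[idx], 1)
        preds.append(np.polyval(coef, x))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

mean_in, std_in = ensemble_predict(np.array([0.5]))    # inside the training range
mean_out, std_out = ensemble_predict(np.array([5.0]))  # far outside it
# std_out exceeds std_in: the ensemble "knows it doesn't know" at x = 5.
```

The same principle, applied voxel-wise with a neural network, is what lets high predictive uncertainty flag image structures absent from the training data.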
Typical supervised deep learning approaches do not capture the epistemic nor predictive uncertainty because only one set of model parameters is learned and only a single prediction is produced (e.g., a single pseudo-CT image).
In this work for PET/MRI attenuation correction, the predictive uncertainty is used to automatically weight the balance between the deep learning μ-map prediction from MRI and the μ-map estimates from the PET emission data from MLAA. When the model is expected to have good performance on a region in a test image, MLAA has minimal contribution. However, when the model is expected to have poor performance on regions in a test image, MLAA has a stronger contribution to the attenuation coefficient estimates of those regions.
Specifically, we extend the framework of Ahn et al. 's MLAA regularized with MR-based priors [ 37 ] and generate MR-based priors with a Bayesian convolutional neural network (BCNN) [ 47 ] that additionally provides a predictive uncertainty map to automatically modulate the strength of the MLAA priors. We demonstrate a proof-of-concept methodology that produces anatomically correct, accurate, and precise μ-maps with high SNR that can recover metal implants for PET/MRI attenuation correction in the pelvis.
Materials and Methods
Uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA) is composed of two major elements: 1) initial pseudo-CT characterization with Bayesian deep learning through the Monte Carlo Dropout [ 47 ] and 2) PET reconstruction with regularized MLAA [ 37 ]. The algorithm is depicted in Fig. 1 and each component is described in detail below.
Bayesian Deep Learning
The architecture of the BCNN is shown in Fig. 2 . It was based on the U-net-like network in [ 21 ] with the following modifications: 1) Dropout [ 47 ], [ 48 ] was included after every convolution; 2) the patch size was increased to 64 × 64 × 32 voxels; and 3) the number of channels in each layer was increased fourfold to compensate for the reduction of information capacity due to the Dropout. The PyTorch software package [ 49 ] (v0.4.1, http://pytorch.org ) was used.
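Since the key architectural change is Dropout after every convolution that is kept active at inference, a toy NumPy version of the mechanism may help (this is not the paper's PyTorch U-net; the two-layer "network" and its random weights are stand-ins for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(x, p=0.5):
    """Inverted dropout: zero each unit with probability p, rescale survivors."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# Toy two-layer network; fixed random weights stand in for a trained model.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, mc_dropout):
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden layer
    if mc_dropout:
        h = dropout(h)               # kept ON at inference for Monte Carlo sampling
    return h @ W2

x = rng.normal(size=(1, 8))
det = forward(x, mc_dropout=False)   # standard deterministic prediction
mc = np.array([forward(x, mc_dropout=True) for _ in range(100)])
# Each stochastic pass is one plausible prediction; their spread is nonzero.
```

With Dropout disabled, repeated calls return the same output; with it enabled, each pass samples a different sub-network, which is what the Monte Carlo Dropout inference below exploits.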
Inputs to the model were volume patches of the following dimensions and size: 64 pixels × 64 pixels × 32 pixels × 3 channels. Each channel was a volume patch of the bias-corrected and fat-tissue normalized Dixon in-phase image, Dixon fractional fat image, and Dixon fractional water image, respectively, at the same spatial locations [ 50 ]. The output was a corresponding pseudo-CT image with size 64 pixels × 64 pixels × 32 pixels × 1 channel. ZTE MRI was not used as an input to this model since it has been demonstrated that accurate HU estimates can be achieved with only the Dixon MR pulse sequence [ 22 ], [ 50 ].
Model Training:
Model training was performed similarly to our previous work [ 21 ], [ 50 ]. The loss function was a combination of the L1-loss, the gradient difference loss (GDL), and the Laplacian difference loss (LDL), of the form L = ‖y − ŷ‖₁ + λ_GDL‖∇y − ∇ŷ‖₂² + λ_LDL‖Δy − Δŷ‖₂², where ∇ is the gradient operator, Δ is the Laplacian operator, y is the ground-truth CT image patch, ŷ is the output pseudo-CT image patch, and λ_GDL and λ_LDL are scalar weights. The Adam optimizer [ 51 ] was used to train the neural network, with an L2 regularization on the weights of the network. He initialization [ 52 ] was used, and a minibatch of four volumetric patches was used for training on two NVIDIA GTX Titan X Pascal (NVIDIA Corporation, Santa Clara, CA, USA) graphics processing units. The models were trained for approximately 68 h to achieve 100,000 iterations.
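A NumPy sketch of this composite loss follows. The unit weights on the GDL and LDL terms are an illustrative assumption (the paper's weighting constants did not survive extraction), and `np.gradient` stands in for whatever discrete derivative the authors used:

```python
import numpy as np

def laplacian(img):
    """Sum of discrete second derivatives along each axis."""
    return sum(np.gradient(np.gradient(img, axis=a), axis=a) for a in range(img.ndim))

def combined_loss(y_true, y_pred, w_gdl=1.0, w_ldl=1.0):
    """L1 + gradient-difference + Laplacian-difference loss on image patches."""
    l1 = np.abs(y_true - y_pred).mean()
    gdl = sum(((np.gradient(y_true, axis=a) - np.gradient(y_pred, axis=a)) ** 2).mean()
              for a in range(y_true.ndim))
    ldl = ((laplacian(y_true) - laplacian(y_pred)) ** 2).mean()
    return l1 + w_gdl * gdl + w_ldl * ldl

rng = np.random.default_rng(0)
y = rng.normal(size=(16, 16, 8))   # stands in for a ground-truth CT patch
```

The gradient and Laplacian terms penalize disagreement in edges and fine structure, not just voxel intensities, which is why they are commonly added to an L1 image loss.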
Pseudo-CT Prior and Weight Map
The generation of the pseudo-CT estimate and variance image was performed through Monte Carlo Dropout [ 47 ] with the BCNN described above. The Monte Carlo Dropout inference is outlined in Fig. 1 . A total of 243 Monte Carlo samples were performed to generate a pseudo-CT estimate ȳ = (1/N) Σᵢ fᵢ(x) and a variance map σ² = (1/N) Σᵢ (fᵢ(x) − ȳ)², where fᵢ is a sample of the BCNN with Dropout, x is the input Dixon MRI, and N is the number of Monte Carlo samples. Inference took approximately 40 min per patient on 8 NVIDIA K80 graphics processing units. We include a detailed description of the sources of uncertainties and variations in the supplementary material .
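The aggregation of the stochastic passes into a mean pseudo-CT and a variance map can be sketched as follows. Here `fake_bcnn` is a stand-in for one Dropout-enabled forward pass (it ignores its input), with an artificial "implant" patch where successive predictions are deliberately inconsistent:

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_dropout_maps(sampler, x, n_samples=243):
    """Aggregate stochastic forward passes f_i(x) into mean and variance maps."""
    samples = np.stack([sampler(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

def fake_bcnn(_x):
    """Stand-in for one BCNN pass: soft tissue (~40 HU) everywhere, plus
    wildly inconsistent values in an out-of-distribution patch."""
    pred = np.full((16, 16), 40.0) + rng.normal(0.0, 5.0, (16, 16))
    pred[4:8, 4:8] += rng.normal(0.0, 400.0)   # "implant" region
    return pred

mean_pct, var_map = mc_dropout_maps(fake_bcnn, None)
# var_map is large only where the samples disagree (the implant patch).
```

The variance map is large only in the region where the sampled predictions disagree, which is exactly the signal the weight-map transformation below consumes.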
The pseudo-CT estimate was converted to a μ-map with a bilinear model [ 1 ], and the variance map was converted to a weight map with a range of 0.0 to 1.0 with an empirical sigmoidal transformation of the form w(v) = 1 / (1 + exp((σ²(v) − a)/b)), where σ²(v) is the variance at voxel position v and a and b are empirically chosen constants. The sigmoidal transformation was calibrated by inspecting the resulting variance maps. It was designed such that the transition band of the sigmoid covers the range of variances in the body and finally saturates at the uncertainty values of bowel air and metal artifact regions. With the constants chosen, the transition band of the sigmoid corresponds to variances of 0 to ∼100,000 HU 2 (standard deviations of 0 to ∼300 HU). The weight map was then linearly scaled to have a range of 1×10 3 to 5×10 6 , called β. Low β values correspond to regions with high uncertainty and, thus, the estimation for these regions is dominated by the emission data. Additional information about the empirical transformation is provided in the supplementary material .
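As a concrete illustration, the variance-to-weight transformation can be sketched in NumPy. The sigmoid center `a` and width `b` below are illustrative placeholders, not the paper's calibrated constants; only the qualitative behavior (high variance → low weight) follows the text:

```python
import numpy as np

def weight_map(var, a=5.0e4, b=1.0e4, beta_min=1.0e3, beta_max=5.0e6):
    """Map per-voxel variance (HU^2) to an MLAA prior weight.

    High variance (uncertain voxels, e.g. metal artifacts) -> low weight,
    so the PET emission data dominates there; low variance -> high weight,
    so the pseudo-CT prior dominates. a (center) and b (width) are
    illustrative constants, not the paper's calibrated values.
    """
    z = np.clip((np.asarray(var, dtype=float) - a) / b, -60.0, 60.0)
    w = 1.0 / (1.0 + np.exp(z))                  # sigmoid in [0, 1], decreasing
    return beta_min + w * (beta_max - beta_min)  # linear rescale to weight range

variances = np.array([0.0, 2.5e4, 1.0e5, 1.0e7])  # normal anatomy -> implant
weights = weight_map(variances)
# weights fall monotonically from ~5e6 toward 1e3 as variance grows.
```

Clipping the sigmoid argument avoids floating-point overflow for extreme variances such as those in metal artifact regions.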
The weight map was additionally processed to set weights outside the body (e.g., air voxels) to 0.0 so that these were not included in MLAA reconstruction. A body mask was generated by thresholding (> −400 HU) the pseudo-CT estimate. The initial body mask was morphologically eroded by a 1-voxel radius sphere. Holes in the body were then filled in with the imfill function (Image Processing Toolbox, MATLAB 2014b) at each axial slice. The body masks were then further refined by removing arms as in our previous work [ 14 ].
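A SciPy version of this masking pipeline might look like the following, with MATLAB's `imfill` replaced by `scipy.ndimage.binary_fill_holes` applied per axial slice; the arm-removal step is omitted, and the toy volume is a synthetic placeholder:

```python
import numpy as np
from scipy import ndimage

def body_mask(pseudo_ct, threshold=-400.0):
    """Threshold the pseudo-CT, erode by ~1 voxel, fill holes slice-by-slice."""
    mask = pseudo_ct > threshold
    mask = ndimage.binary_erosion(mask, ndimage.generate_binary_structure(3, 1))
    for z in range(mask.shape[2]):
        # Per-slice hole filling, mirroring MATLAB's imfill on axial slices.
        mask[:, :, z] = ndimage.binary_fill_holes(mask[:, :, z])
    return mask

# Toy volume: air background, a soft-tissue block, and an internal air pocket.
vol = np.full((20, 20, 5), -1000.0)
vol[4:16, 4:16, :] = 0.0          # "body"
vol[8:12, 8:12, 2] = -1000.0      # enclosed air pocket
mask = body_mask(vol)
```

The enclosed pocket is filled (so internal air stays inside the body mask), while background air connected to the image border is left out.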
Uncertainty Estimation and Pseudo-CT Prior for Robust Maximum-Likelihood Estimation of Activity and Attenuation
UpCT-MLAA is a combination of the outputs of the BCNN and regularized MLAA. The process is depicted in Fig. 1 . MRI and CT images of patients without metal implants were used to train the BCNN.
We explicitly trained the network only on patients without metal implants to force the BCNN to extrapolate on the voxel regions containing metal implant (i.e., “out-of-distribution” features) to maximize the uncertainty in these regions.
Thus, a high variance emerged in implant regions, compared to a low variance in normal anatomy (0 to ~2.5×10 4 HU 2 ), with the uncertainty estimation, as can be seen in Fig. 1 . The μ-map estimate and the weight map were then provided to the regularized MLAA [ 37 ] to perform PET reconstruction (five iterations with 28 subsets, where each iteration consists of one time-of-flight ordered subsets expectation maximization with a point spread function model (TOF-OSEM) iteration and five ordered subsets transmission (OSTR) iterations). Specifically, the MR-based regularization term in MLAA is R(μ) = Σⱼ βⱼ (μⱼ − μ̄ⱼ)², where j indexes over each voxel in the volume, μ̄ⱼ is determined from the mean pseudo-CT image, and βⱼ is determined from the variance image through the weight map transformation. The formulation in ( 5 ) is slightly different from that in [ 37 , Sec. 2.3.2] but has the same effect.
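The role of the per-voxel prior weight can be caricatured with a single-voxel quadratic trade-off. This is not MLAA itself (which alternates TOF-OSEM and OSTR updates on the full likelihood); it is only a sketch of how β arbitrates between emission-derived and prior attenuation values, and `data_weight` is a purely illustrative constant:

```python
import numpy as np

def prior_penalty(mu, mu_prior, beta):
    """Quadratic MR-based prior: R(mu) = sum_j beta_j * (mu_j - mu_prior_j)^2."""
    return float(np.sum(beta * (mu - mu_prior) ** 2))

def fuse(mu_emission, mu_prior, beta, data_weight=1.0e5):
    """Minimizer of data_weight*(mu - mu_emission)^2 + beta*(mu - mu_prior)^2.

    A one-voxel caricature of the prior's role: high beta pulls the voxel
    toward the pseudo-CT prior, low beta lets the emission-derived value win.
    """
    return (data_weight * mu_emission + beta * mu_prior) / (data_weight + beta)

# Implant voxel: prior (wrongly) says air, emission says metal-like attenuation.
implant = fuse(mu_emission=0.15, mu_prior=0.0, beta=1.0e3)    # emission wins
# Normal-anatomy voxel: confident prior dominates a noisy emission estimate.
tissue = fuse(mu_emission=0.12, mu_prior=0.096, beta=5.0e6)   # prior wins
```

With a low β (high uncertainty) the fused value tracks the emission estimate, recovering the implant; with a high β (low uncertainty) it tracks the pseudo-CT prior, preserving SNR in normal anatomy.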
Patient Studies
The study was approved by the local Institutional Review Board (IRB). Patients who were imaged with PSMA-11 signed a written informed consent form while the IRB waived the requirement for informed consent for FDG and DOTATATE studies.
Patients with pelvic lesions were scanned using an integrated 3 Tesla time-of-flight PET/MRI system [ 53 ] (SIGNA PET/MR, GE Healthcare, Chicago, IL, USA). The patient population consisted of 29 patients (Age = 58.7±13.9 years old, 16 males, 13 females): ten patients without implants were used for model training, 16 patients without implants were used for evaluation with a CT reference, and three patients with implants were used for evaluation in the presence of metal artifacts.
PET/MRI Acquisition.
The PET acquisition on the evaluation set was performed with different radiotracers: 18F-FDG (11 patients), 68Ga-PSMA-11 (seven patients), and 68Ga-DOTATATE (one patient). The PET scan had a 600-mm transaxial field-of-view (FOV) and a 25-cm axial FOV, with a time-of-flight timing resolution of approximately 400 ps. The imaging protocol included a six bed-position whole-body PET/MRI and a dedicated pelvic PET/MRI acquisition. The PET data were acquired for 15–20 min during the dedicated pelvis acquisition, during which clinical MRI sequences and the following magnetic resonance-based attenuation correction (MRAC) sequences were acquired: Dixon (FOV = 500×500×312 mm, resolution = 1.95 × 1.95 mm, slice thickness = 5.2 mm, slice spacing = 2.6 mm, and scan time = 18 s) and ZTE MR (cubical FOV = 340×340×340 mm, isotropic resolution = 2×2×2 mm, 1.36 ms readout duration, FA = 0.6°, 4 s hard RF pulse, and scan time = 123 s).
CT Imaging.
Helical CT images of the patients were acquired separately on different machines (GE Discovery STE, GE Discovery ST, Siemens Biograph 16, Siemens Biograph 6, Philips Gemini TF ToF 16, Philips Gemini TF ToF 64, Siemens SOMATOM Definition AS) and were co-registered to the MR images using the method outlined below. Multiple CT protocols were used with varying parameter settings (110–130 kVp, 30–494 mA, rotation time = 0.5 s, pitch = 0.6–1.375, 11.5–55 mm/rotation, axial FOV = 500–700 mm, slice thickness = 3–5 mm, and matrix size = 512×512).
Preprocessing consisted of filling in bowel air with soft-tissue HU values and copying arms from the Dixon-derived pseudo-CT, due to the differences in bowel air distribution and the CT scan being acquired with arms up, respectively [ 14 ].
MRI and CT image pairs were co-registered using the ANTS [ 54 ] registration package and the SyN diffeomorphic deformation model with combined mutual information and cross-correlation metrics [ 14 ], [ 21 ], [ 50 ].
PET Reconstructions
In addition to UpCT-MLAA, additional PET reconstructions were performed for comparison.
For each patient without metal implants: 1) UpCT-MLAA was performed, and TOF-OSEM [ 55 ] (transaxial FOV = 600 mm, two iterations, 28 subsets, matrix size = 192 × 192, and 89 slices of 2.78-mm thickness) was performed with three μ-maps: 2) ZeDD-CTAC; 3) the initial AC estimate of the BCNN (BpCT-AC); and 4) CTAC, for comparison. BpCT-AC is a surrogate for ZeDD-CTAC but without the use of a specialized MR sequence.
For each patient with metal implants, UpCT-MLAA was performed along with: 1) naive MLAA; 2)–4) regularized MLAA with increasing regularization parameters (β constant over the volume); 5) TOF-OSEM with BpCT-AC; and 6) TOF-OSEM with CTAC for comparison.
Data Analysis.
Image error analysis and lesion-based analysis were performed for patients without metal implants: the average and standard deviation of the error, mean-absolute-error (MAE), and root-mean-squared-error (RMSE) were computed over voxels that met a minimum signal amplitude and/or signal-to-noise criteria [ 21 ]. Global HU and PET SUV comparisons were only performed in voxels with amplitudes > −950 HU in the ground-truth CT to exclude air, and a similar threshold of > 0.01 cm −1 attenuation in the CTAC was used for comparison of AC maps. Bone and soft-tissue lesions were identified by a board-certified radiologist. Bone lesions were defined as lesions inside the bone or with lesion boundaries within 10 mm of bone [ 56 ]. A Wilcoxon signed-rank test was used to compare the biases of individual lesions relative to CTAC.
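The voxel-wise error metrics and the paired lesion test described here can be sketched with NumPy/SciPy; the arrays below are synthetic placeholders, not study data:

```python
import numpy as np
from scipy.stats import wilcoxon

def error_metrics(truth, pred, mask):
    """Bias, standard deviation, MAE, and RMSE of (pred - truth) within mask."""
    err = (pred - truth)[mask]
    return {"bias": err.mean(), "std": err.std(),
            "mae": np.abs(err).mean(), "rmse": np.sqrt((err ** 2).mean())}

rng = np.random.default_rng(3)
ct = rng.uniform(-1000.0, 1000.0, (32, 32, 8))   # synthetic ground-truth CT
pct = ct + rng.normal(0.0, 30.0, ct.shape)       # synthetic pseudo-CT
metrics = error_metrics(ct, pct, ct > -950.0)    # exclude air voxels, as in the text

# Paired lesion comparison, e.g. lesion uptake under two attenuation corrections.
suv_ref = rng.uniform(2.0, 10.0, 30)
suv_test = suv_ref + rng.normal(0.1, 0.2, 30)
stat, p = wilcoxon(suv_ref, suv_test)
```

The Wilcoxon signed-rank test is paired and non-parametric, which suits per-lesion comparisons against the CTAC reference without assuming normally distributed biases.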
In the cases where a metal implant was present, we qualitatively examined the resulting AC maps of the different reconstructions and quantitatively compared them with reference CTAC PET. High uptake lesions and lesion-like objects were identified on the PET images reconstructed with UpCT-MLAA and separated into two categories: 1) in-plane with the metal implant and 2) out-plane of the metal implant. A Wilcoxon signed-rank test was used to compare the SUV and SUV_max values between the different reconstruction methods and CTAC PET.
Results
Monte Carlo Dropout
Representative images of the output of the BCNN with Monte Carlo Dropout are shown in Fig. 3 . The same mask used for the weight maps was used to remove voxels outside the body. The pseudo-CT images visually resemble the ground-truth CT images for patients without implants, while in patients with implants, the metal artifact region in the MRI was assigned air HU values. Nonetheless, the associated standard deviation maps highlighted image structures for which the network had high predictive uncertainty, the most important of which are air pockets and the metal implant. The BCNN highlighted these regions and structures in the standard deviation image without being explicitly trained to do so.
An additional example of the uncertainty estimation is provided in Fig. 1 in the supplementary material . The input MRI had motion artifacts due to breathing and arm truncation due to inhomogeneity at the edge of the FOV. Like the metal implants, the BCNN highlighted the motion artifact region and arm truncation in the variance image without being explicitly trained to do so.
Patients Without Implants
The PET reconstruction results for the patients without implants are summarized in Fig. 4 . The RMSE is reported along with the average and standard deviation of the error as RMSE (mean ± standard deviation). Additional results for the pseudo-CT, AC maps, and PET data are provided in Figs. 2 – 5 in the supplementary material .
Attenuation Coefficient Map Results:
The total RMSE for the AC maps compared to gold-standard CTAC across all volumes was 3.1×10⁻³ cm⁻¹ (−5.0×10⁻⁴ ± 3.1×10⁻³ cm⁻¹) for ZeDD-CTAC, 3.2×10⁻³ cm⁻¹ (−3.8×10⁻⁵ ± 3.2×10⁻³ cm⁻¹) for BpCT-AC, and 3.5×10⁻³ cm⁻¹ (−2.6×10⁻⁵ ± 3.5×10⁻³ cm⁻¹) for UpCT-MLAA-AC.
PET Images:
The total RMSE for PET images compared to gold-standard CTAC PET across all volumes was 0.023 SUV (−0.005 ± 0.023 SUV) for ZeDD PET, 0.022 SUV (−8.1×10⁻⁵ ± 0.022 SUV) for BpCT-AC PET, and 0.027 SUV (1.5×10⁻⁴ ± 0.027 SUV) for UpCT-MLAA PET.
Lesion Uptake and SUV_max:
The results for lesion analysis for patients without implants are shown in Fig. 4 . There were 30 bone lesions and 60 soft-tissue lesions across the 16 patient datasets. The RMSE w.r.t. CTAC PET SUV and SUV_max are summarized in Table I . For SUV_max of bone lesions, no significant difference was found between ZeDD PET and BpCT-AC PET (p = 0.116), while ZeDD PET and UpCT-MLAA PET were significantly different (p = 0.037). For SUV_max of soft-tissue lesions, ZeDD PET and BpCT-AC PET were significantly different (p < 0.001), while no significant difference was found between ZeDD PET and UpCT-MLAA PET (p = 0.16).
Patients With Metal Implants
Figs. 5 and 6 show the different AC maps generated with the different reconstruction processes and the associated PET image reconstructions for two different radiotracers (18F-FDG and 68Ga-PSMA), and Fig. 7 shows the summary of the results. Additional results for pseudo-CT, AC maps, and PET images are provided in Figs. 6 – 11 in the supplementary material .
Metal Implant Recovery:
Figs. 5(b) (1st and 2nd columns) and 6(b) (1st and 2nd columns) show the AC map estimation results.
BpCT-AC filled in the metal implant region with air since the metal artifact in MRI appears as a signal void. Although reconstruction using naive MLAA recovered the metal implant, the AC map was noisy and anatomical structures were difficult to depict. The addition of regularization (increasing β) reduces the noise; however, over-regularization eliminates the presence of the metal implant. The choice of radiotracer also influenced reconstruction performance: the MLAA-based methods performed worse with 68Ga-PSMA than with 18F-FDG at low regularization. In contrast, UpCT-MLAA-AC recovered the metal implant while maintaining high-SNR depiction of anatomical structures outside the implant region for both radiotracers. The high attenuation coefficients were constrained to the regions where high variance was measured (i.e., where the metal artifact was present on the BpCT AC maps).
PET Image Reconstruction:
Figs. 5(b) (3rd column) and 6(b) (3rd column) show the PET image reconstruction results.
Qualitatively, the MLAA-based methods (UpCT-MLAA and Standard MLAA) show uptake around the implant, whereas BpCT-AC PET and CTAC PET show the implant region without any uptake. When compared to the NAC PET, the MLAA-based methods better match what is depicted within the implant region. Quantitatively, Table I summarizes the SUV results for voxels in-plane of the metal implant and out-plane of the metal implant.
Quantification:
Fig. 7 shows the comparisons of SUV_max of lesions in-plane and out-plane of the metal implant, and Tables II and III list the RMSE values for SUV and SUV_max. There were six lesions in-plane and 15 lesions out-plane of the metal implants across the three patients with implants. Only UpCT-MLAA provided relatively low quantification errors on lesions both in-plane and out-plane of the metal implant.
For lesions in-plane of the metal implant, BpCT-AC PET had a large underestimation of SUV_max, while naive MLAA PET had a better mean estimation of SUV_max but a large standard deviation. The addition of light regularization to MLAA improves the RMSE by decreasing the standard deviation at the cost of increased mean error. Increasing regularization further increases the RMSE but reduces the bias error, with an increased standard deviation. UpCT-MLAA PET had the best agreement with CTAC PET. Only naive MLAA and UpCT-MLAA produced results where a significant difference could not be found when compared to CTAC ( p > 0.05).
For lesions out-plane of the metal implant, the trend is reversed for BpCT-AC PET and the MLAA methods. BpCT-AC PET had the best agreement with CTAC PET, and the MLAA methods showed decreasing RMSE with increasing regularization. UpCT-MLAA had the second-best agreement with CTAC PET. No significant difference could be found for any method when compared to CTAC ( p > 0.05).
Conclusion
We have developed and evaluated an algorithm that utilizes a Bayesian deep convolutional neural network that provides accurate pseudo-CT priors with uncertainty estimation to enhance MLAA PET reconstruction. The uncertainty estimation allows for the detection of “out-of-distribution” pseudo-CT estimates that MLAA can subsequently correct. We demonstrated quantitative accuracy in pelvic lesions and recovery of metal implants in pelvis PET/MRI.
Supplementary Material

ACKNOWLEDGMENT
The Titan X Pascal used was donated by the NVIDIA Corporation.
This work was supported in part by NIH/NCI under Grant R01CA212148; in part by NIH/NIAMS under Grant R01AR074492; in part by the UCSF Graduate Research Mentorship Fellowship award; and in part by GE Healthcare.
This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the UCSF Institutional Review Board (IRB #17-21852).
Citation: IEEE Trans Radiat Plasma Med Sci. 2022 Jul 6; 6(6):678–689. License: CC BY.
Introduction to collagen receptors
As the most abundant class of ECM proteins, collagens provide structural support for connective tissues, skin and, most importantly, bones and teeth, and can convey information about the extracellular mechanical environment via their interaction with cells using specific collagen receptors. The importance of collagen to bone development is well established; collagen synthesis is necessary for differentiation of skeletal progenitors to osteoblasts ( 1 – 4 ) and conditions that interfere with collagen synthesis or structure in vivo such as vitamin C deficiency or osteogenesis imperfecta severely disrupt bone development ( 5 – 8 ).
Until recently, it was generally assumed that bone cells interacted with the collagenous ECM exclusively through integrins, the best-known ECM receptors. Through their linkage with the cytoskeleton, integrins are major force transducers linking the ECM microenvironment with cellular functions including nuclear transcription ( 9 ). The collagen-binding integrins all have a common β1 subunit and four different alpha subunits to produce α1β1, α2β1, α10β1 and α11β1 integrins, which are all detected in bone ( 10 – 13 ). Disruption of integrin-collagen binding in cell culture using blocking antibodies to specific integrin subunits inhibits osteoblast differentiation of skeletal progenitor cells including preosteoblast cell lines and primary bone marrow cell cultures ( 12 , 14 – 16 ). Because of their shared β1 subunit, the overall requirement for collagen-binding integrins in bone was assessed in vivo using conditional inactivation of the β1 integrin gene ( Itgb1 ). Using this approach, bone phenotypes of varying severity were observed with the strongest effects of Itgb1 inactivation being associated with expression of Cre recombinase early in the bone lineage and milder phenotypes seen at later stages. For example, Itgb1 inactivation in embryonic mesenchymal progenitors using Twist2-Cre was associated with severe bone phenotypes and perinatal lethality ( 17 ). Disruption at later stages using Osx-Cre (preosteoblast stage) reduced skeletal growth, mineralization and mechanical properties, effects that became progressively milder with age while disruption of Itgb1 with Bglap-Cre had only minor effects on skeletal development ( 17 , 18 ). Similarly, Itgb1 inactivation in cartilage using Col2a1-Cre resulted in perinatal lethality in most pups, stunted cartilage growth and disruption of chondrocyte proliferation and polarity ( 19 ). 
Although in some cases loss of Itgb1 function severely retarded bone development, in no case was bone formation and mineralization completely disrupted. This shows that some degree of bone formation can occur in the absence of collagen-binding integrins and suggests the involvement of other collagen receptors.
Interestingly, the collagen-binding integrins appeared relatively late in evolutionary history, being first seen with the emergence of chordates ( 20 ). In contrast, collagen-like proteins are present in all metazoan species ( 21 ). The discoidin domain receptors (DDRs) are a more ancient class of cell-surface collagen binding proteins than integrins. Like collagens, they are present in most invertebrate metazoans including Caenorhabditis elegans , Drosophila melanogaster , and Hydra vulgaris and so could function as collagen receptors before the collagen-binding integrins appeared on the scene. Although functions of DDRs in invertebrates have not been extensively examined, in C. elegans , specific DDR functions have been described related to axonal guidance which also requires collagen. Since DDRs have likely functioned as collagen receptors over a much longer period of time than integrins, they may have more primordial functions related to collagen signaling [for review, see reference ( 22 )].
As will be discussed, DDRs are very different from integrins in terms of their interaction with collagens, structure, mechanism of action, tissue distribution and activity in specific cell populations. This review will specifically focus on roles of DDRs in mineralized tissues. However, it should be noted that DDRs also have non-skeletal functions in epithelial and connective tissues and have been linked to several diseases including cancer, fibrosis, and kidney disease that will not be discussed here. The reader is referred to several excellent reviews for a comprehensive treatment of these diverse DDR activities ( 23 – 26 ). | Author contributions
RF wrote and edited the article. SH wrote portions of the article and edited the entire article. CG wrote portions of the article and edited the entire article. All authors contributed to the article and approved the submitted version.
The extracellular matrix (ECM) niche plays a critical role in determining cellular behavior during bone development including the differentiation and lineage allocation of skeletal progenitor cells to chondrocytes, osteoblasts, or marrow adipocytes. As the major ECM component in mineralized tissues, collagen has instructive as well as structural roles during bone development and is required for bone cell differentiation. Cells sense their extracellular environment using specific cell surface receptors. For many years, specific β1 integrins were considered the main collagen receptors in bone, but, more recently, the important role of a second, more primordial collagen receptor family, the discoidin domain receptors, has become apparent. This review will specifically focus on the roles of discoidin domain receptors in mineralized tissue development as well as related functions in abnormal bone formation, regeneration and metabolism. | DDR structure and function
Unlike integrins, which lack intrinsic kinase activity, the DDRs are collagen-activated receptor tyrosine kinases (RTKs) that share homology in their kinase domain with growth factor receptors such as the neurotrophin receptor, TrkA ( 25 , 27 , 28 ). DDRs are named for their homology to the Dictyostelium discoideum lectin, discoidin. In mammals, there are two DDR proteins, DDR1 and DDR2, which show different preferences for binding to fibrillar and non-fibrillar collagens. Both DDR1 and 2 bind type I, II, III and V fibrillar collagens. In contrast, DDR1 selectively binds basement membrane type IV collagen while DDR2 binds type X collagen ( 27 – 29 ). The overall structural features of DDR1 and 2 are summarized in Figure 1 . Starting from the N-terminus, both proteins have an extracellular DS domain, the region of homology with discoidin, a DS-like domain, a juxtamembrane domain, a single pass transmembrane domain, an intracellular juxtamembrane domain and an intracellular kinase domain. DS and DS-like domains and the kinase domain are highly conserved between DDR1 and DDR2. The DS domain distinguishes the DDRs from other RTKs and contains the binding site for triple-helical collagens ( 31 , 32 ). DDR1 exists in 5 different spliced forms while only a single DDR2 protein has been described. In DDR1, the extracellular and transmembrane domains are shared between all 5 isoforms while there are several differences in the cytoplasmic domains. Two of the 5 DDR1 splice variants lack a functional kinase domain and could potentially act as decoy receptors for the kinase-containing isoforms ( 25 ).
Like the collagen-binding integrins, the DDRs only bind to native triple-helical collagens [i.e., thermally denatured collagen cannot serve as a binding substrate ( 21 , 28 , 31 )]. DDR1 and 2 both bind a 6 amino acid sequence present in fibrillar collagens I-III, GVMGFO, where O is hydroxyproline ( 33 , 34 ). This same sequence is also recognized by two other collagen-binding proteins, Secreted Protein Acidic and Rich in Cysteine (SPARC) and von Willebrand Factor that have functions in collagen mineralization and the blood coagulation cascade, respectively ( 35 , 36 ). The GVMGFO sequence is distinct from the motif recognized by collagen-binding integrins which has the consensus sequence, GxOGEx (e.g., GFOGER or GAOGER in fibrillar collagens) ( 37 , 38 ). Interestingly, in the COL1A1, COL2A1 and COL3A1 chains of types I–III collagen, the O of GVMGFO and the G of GFOGER/GAOGER are separated by 96 amino acid residues, a finding with possible implications concerning coupling between DDRs and integrins (see Section 6 ). The interaction between the DDR2 DS domain and a triple-helical peptide containing the GVMGFO sequence has been examined at atomic resolution using x-ray crystallography ( 39 ). These studies identified an amphiphilic binding pocket for the GVMGFO sequence that is conserved between DDR2 and DDR1. One side of this pocket contains apolar amino acid residues (Trp52, Thr56, Asn175, Cys73-Cys177) while the other side contains polar residues forming a salt bridge (Arg105-Glu113, Asp69) ( 39 ).
Like other RTKs, the DDRs are ligand-activated tyrosine kinases. However, instead of responding to soluble molecules such as growth factors, the DDRs have high molecular weight triple-helical collagen as a ligand. They differ from classic RTKs in other ways as well. Instead of existing as monomers that dimerize with ligand binding, DDRs are homodimers in the unactivated state ( 40 , 41 ). Also, instead of being activated by their ligands and undergoing autophosphorylation within seconds to minutes like other RTKs, DDR phosphorylation takes hours and can often persist for days after binding collagen ( 27 , 28 ). No truly satisfactory explanation for this phenomenon has been advanced although the involvement of secondary cellular processes such as oligomerization or internalization may be important ( 40 , 42 ). Since DDRs are activated with similar kinetics by small triple-helical peptides containing the GVMGFO core binding sequence, higher order fibrillar structure of native collagen is not required for this unusual behavior ( 33 , 34 ).
Once activated, DDRs stimulate several downstream signals including ERK1/2 and p38 mitogen-activated protein kinase, phosphatidylinositol-3-kinase/AKT and NF-Kβ pathways. DDRs may also have functions separate from their kinase activities, possibly related to the control of collagen fibrillogenesis and/or orientation ( 43 , 44 ). It is not the purpose of this review to provide a comprehensive discussion of DDR2 signaling mechanisms as these have been thoroughly reviewed by others [see ref ( 23 , 25 )].
Tissue distribution of DDR1 and DDR2 in mineralized tissues
Initial evaluation of Ddr1 and Ddr2 mRNA distribution suggested that Ddr1 is predominantly expressed in epithelial tissues, smooth muscle and immune cells while Ddr2 is in connective tissues ( 45 ). More recently, tissue distribution was assessed by immunohistochemistry and in situ hybridization as well as by using a LacZ knock-in Ddr2 mutant where a bacterial β-galactosidase gene was inserted into the Ddr2 locus. The following discussion will emphasize DDR distribution in mineralized tissues.
DDR1
Although an early study that measured DDR1 binding sites in mice using DDR1 extracellular domain fused with alkaline phosphatase showed binding to all skeletal structures, skin and the urogenital tract because of their high collagen content ( 46 ), studies that actually measured the tissue distribution of DDR1 protein or mRNA are quite limited. In neonatal and adult mice, DDR1 was localized by immunohistochemistry to proliferating and hypertrophic chondrocytes of long bone growth plates, cortical and trabecular bone osteocytes, periosteum, and articular chondrocytes ( 47 – 49 ). In situ hybridization analysis was conducted in oral tissues using a Ddr1 probe ( 50 ). Consistent with an epithelial pattern of expression, highest Ddr1 mRNA levels were detected in oral epithelium including enamel organs of developing molars and basal cell layers of the oral epithelium, but low expression in ectomesenchymal tissues.
DDR2
Early in situ hybridization studies localized Ddr2 expression to tibial growth plates ( 51 ). Subsequent more detailed analysis using Ddr2 +/LacZ mice stained for β-galactosidase activity, first detected Ddr2 expression in bone rudiments at E11.5 ( 52 , 53 ). Analysis from E13.5 through adulthood showed strong staining in all developing skeletal elements in the appendicular, axial and cranial skeletons including growth plate cartilage, metaphyses, periosteum, cranial sutures and cranial base synchondroses. In general, expression was higher in cells representing earlier stages of each skeletal lineage. For example, in growth plates and synchondroses, expression was higher in resting and proliferating zone cells and lower in hypertrophic layers. Also, while Ddr2 was detected in marrow and periosteal/preosteoblast layers near forming trabecular and cortical bone surfaces, no expression was detected in osteocytes. Similar periosteal localization was reported using immunohistochemistry where DDR2 colocalized with alkaline phosphatase, a preosteoblast marker ( 54 ). Notably, this distribution is very different from most of the collagen-binding integrins (α1β1, α2β1, α11β1) that are broadly expressed in connective tissues [reviewed in ref ( 55 )]. However, there may be some overlap with integrin α10β1 which shows preferential expression in chondrocytes ( 11 , 56 ). Ddr2 +/LacZ mice were also used to examine Ddr2 expression during tooth development ( 57 ) and in the temporomandibular joint (TMJ) ( 58 ). Ddr2 was widely expressed in non-epithelial tooth structures including dental follicle and dental papilla during development and odontoblasts, alveolar bone osteoblast and periodontal ligament fibroblasts of adults. In contrast to the Ddr1 mRNA distribution described above, it was conspicuously absent from epithelial structures including ameloblasts and Hertwig’s epithelial root sheath. Strong Ddr2 expression was also detected in the TMJ articular surface of adult mice. 
Interestingly, at this age, Ddr2 expression in the articular surface of the knee joint was quite low, suggesting differences between the fibrocartilage of the TMJ and the hyaline cartilage of the knee ( 58 ).
Localization of DDR2 in skeletal progenitor cells
To gain further insight into the lineage of Ddr2 -expressing cells, Ddr2 mer–icre–mer ; ROSA26 LSLtdTomato mice were developed ( 52 , 53 ). After tamoxifen-induced recombination, Ddr2 -expressing cells are labelled with tdTomato fluorescent protein, thereby allowing these cells to be followed over time. Mice were injected with tamoxifen from P1-P4 and tdTomato+ cells were lineage-traced for up to 2 months. Initially, tdTomato+ cells had a similar distribution to that seen in Ddr2 +/LacZ mice, with labelling in growth plate and synchondrosis resting zones, cranial sutures, perichondrium, trabeculae, and periosteum, but not in more differentiated cells. Over time, tdTomato + cells appeared in proliferating and hypertrophic chondrocytes, osteoblasts and, eventually, osteocytes. Osteoclasts were not labelled. This result is what would be expected if Ddr2 were expressed in skeletal progenitor cells (SPCs) whose progeny became the mature cells of each skeletal lineage (hypertrophic chondrocytes for the cartilage lineage, osteocytes for the osteoblast lineage). Consistent with this concept, a high degree of colocalization between DDR2 and the skeletal progenitor/stem cell marker, GLI1 ( 59 , 60 ), was observed by immunofluorescence in cranial sutures, synchondroses and tibial growth plates ( 52 , 53 ). Also, CD140α + /CD51 + SPCs purified from bone marrow by FACS were enriched in Ddr2 mRNA ( 52 ).
Further evidence for DDR2 being a marker for skeletal stem cells comes from a recent study, published in preprint form, in which DDR2 was detected in a unique cranial suture cell population ( 61 ) that could be distinguished from previously described CTSK+ suture stem cells (SSCs) ( 62 ). These DDR2 + cells have several stem cell properties including a long cycling time, capacity for self-renewal after in vivo implantation, potential to differentiate to osteoblasts, adipocytes and chondrocytes, expression of several stem cell markers including GLI1, and the capacity to generate all DDR2+ cells present in the native suture. Interestingly, conditional ablation of Ctsk -labeled SSCs using diphtheria toxin administration to iDTR ; Ctsk-Cre mice led to increased expansion of DDR2 + suture cells and suture fusion via an endochondral mechanism. The authors postulate that DDR2 + suture stem cells contribute to a novel form of endochondral ossification without hematopoietic recruitment, a third potential mechanism of bone formation.
Regulation of Ddr2 transcription
The transcriptional control mechanisms regulating DDR2 levels in bone cells are not well understood. To date they have only been examined in cell culture, where Ddr2 is upregulated during osteoblast differentiation ( 63 – 65 ). One possible factor controlling this upregulation is ATF4 which, together with C/EBPβ, interacts with a C/EBP binding site at −1,150 bp in the Ddr2 promoter to stimulate Ddr2 expression and subsequent increases in osteoblast marker mRNAs ( 65 ). However, it is not known if these control mechanisms function in vivo or if other factors participate in this regulation.
Genetic models for understanding DDR functions in mineralized tissues
Experiments of nature (i.e., human genetic diseases) as well as gene inactivation mouse models have been described that, taken together, provide considerable insight into how DDRs function in bone, cartilage and the dentition.
Human loss-of-function mutations in DDR2 are associated with severe skeletal and craniofacial defects while gain-of-function mutations cause fibrosis and skull abnormalities
To date, no human mutations in DDR1 have been identified. In contrast, genetic disorders associated with both loss and gain-of-function mutations in DDR2 have been described. Spondylo-meta-epiphyseal dysplasia with short limbs and abnormal calcifications (SMED, SL-AC) is a rare autosomal recessive genetic disorder, first described in 1993, that is associated with dwarfism, short limbs, reduced bone mass, abnormal skull shape including mid-face hypoplasia and hypertelorism, open fontanelles, micrognathia and tooth abnormalities ( 66 ). This disorder was subsequently mapped to chromosome 1q23, the locus of DDR2 , and shown to be caused by loss-of-function mutations in the DDR2 tyrosine kinase domain as well as mutations affecting intracellular trafficking ( 67 – 70 ). Unfortunately, individuals with this disorder rarely survive beyond childhood; atlantoaxial instability and resulting spinal cord damage is the most common cause of death ( 71 , 72 ). The short lifespan of SMED, SL-AC patients, combined with the rarity of this disorder, has limited studies in humans.
A second disorder, designated as Warburg-Cinotti Syndrome, was described in 2018 and associated with putative activating mutations in the DDR2 kinase domain ( 73 ). Fibroblasts from patients exhibited high levels of DDR2 phosphorylation in the absence of collagen stimulation, suggesting that receptor activation was ligand-independent. This disorder, which is inherited in an autosomal dominant manner, is associated with progressive fibrosis, corneal vascularization, skull abnormalities and osteolysis. In view of the deleterious effects of DDR2 loss-of-function mutations on bone formation in SMED, SL-AC patients, it is not clear why activating mutations would lead to an osteolytic phenotype. However, since only 6 patients with Warburg-Cinotti Syndrome have been described, the phenotypic variation within this disorder cannot currently be assessed.
DDR2 may also be a determinant of bone mineral density (BMD) and fracture risk in human populations. Analysis of a Chinese Han population and an American Caucasian population identified 28 SNPs in DDR2 . Of these, 3 were significantly associated with hip BMD in the Chinese, but not the American, population ( 74 ). Although this preliminary finding suggests that certain polymorphisms in DDR2 may be risk factors for osteoporosis, more studies are needed, particularly in diverse populations, to assess the significance of these findings.
As will be described below, the phenotypic similarities between SMED, SL-AC patients and Ddr2 -deficient mice indicate that mice are an appropriate model for studying this disease.
Global Ddr1 and Ddr2 knockout models suggest roles in bone and tooth development
As shown in early studies, global knockout of either Ddr1 or Ddr2 resulted in dwarf phenotypes, particularly for Ddr2 -deficient mice ( 46 , 51 ). However, different bases for the observed growth deficits were proposed. In Ddr1 deficient mice, all organs were proportionally smaller suggesting an overall growth defect ( 46 ). However, no differences in growth plate size, chondrocyte proliferation or apoptosis were noted.
In contrast, initial analysis of globally Ddr2 deficient mice showed prominent growth retardation that was attributed to decreased proliferation of growth plate chondrocytes in the absence of changes in apoptosis resulting in shortened growth plates ( 51 ). A similar phenotype was subsequently observed in Ddr2 slie/slie mice, which have a spontaneous 150 kb deletion in Ddr2 that encompasses exons 1–17 to produce an effective null allele ( 75 ). A more detailed analysis of the bone phenotype of Ddr2 slie/slie mice revealed that skeletal growth defects were accompanied by large reductions in trabecular bone volume, trabecular thickness and number, changes that were attributed to reduced bone formation rate rather than stimulation of osteoclastic bone resorption ( 65 ). Similar changes in vertebral trabecular bone were also seen. However, cortical bone was only slightly affected. Interestingly, the reduction in bone mass with Ddr2 deficiency was accompanied by an increase in marrow fat. Consistent with these changes, bone marrow stromal cells (BMSCs) or calvarial preosteoblasts cultured from Ddr2 slie/slie mice exhibited defective osteoblast differentiation while differentiation of BMSCs to adipocytes was enhanced.
Changes in craniofacial morphology in Ddr1 and Ddr2 -deficient mice have been compared using a machine learning approach that was able to clearly discriminate between skulls from wildtype, Ddr1 and Ddr2 -deficient mice ( 76 ). Although Ddr1 -deficient skulls are somewhat smaller than wild type controls, they have no substantial alterations in relative skull dimensions. In contrast, skulls from Ddr2 -deficient mice are dramatically shorter in the anterior-posterior direction with a more spherical skull shape associated with increased anterior skull width as well as reduced nasal bone length. Subsequent analysis of this phenotype identified a defect in proliferation of synchondrosis chondrocytes, particularly in the intersphenoid synchondrosis, in the absence of changes in apoptosis ( 53 ). These changes were associated with a characteristic expansion of the synchondrosis resting zone, possibly related to the defective conversion of these cells into proliferating chondrocytes. Ddr2 -deficient skulls also have open fontanelles at birth, thinning of frontal bones and defects in frontal suture fusion that persist into adulthood ( 53 , 65 ).
Effects of global Ddr1 and Ddr2 inactivation on the dentition were also examined. Ddr1 -deficient mice had normal teeth, but age-dependent periodontal degeneration including alveolar bone loss was noted ( 50 ). In contrast, teeth from Ddr2 slie/slie mice had smaller roots and reduced crown/root ratio resulting in disproportionate tooth size ( 57 ). These mice also exhibited gradual alveolar bone loss over a 10-month period due to increased osteoclast activity as well as atypical periodontal ligament collagen fibrils.
Conditional Ddr1 and Ddr2 inactivation studies in bone
In addition to affecting the skeleton, global Ddr1 deficiency inhibits uterine development and embryo implantation as well as mammary epithelium development, leading to defective milk production ( 46 ). Likewise, Ddr2 deficiency reduces fertility by inhibiting female and male gonadal function and steroid hormone production, leading to partial sterility, and interferes with certain metabolic activities ( 75 ) (see Section 8 ). Because effects of global inactivation of Ddr1 or Ddr2 are not restricted to the skeleton, specific cell-autonomous functions of these collagen receptors in bone cannot be inferred from global knockout studies. Although several early studies with osteoblast and chondrocyte cell lines and primary cultures suggested direct functions for DDR1 and DDR2 in bone cells ( 48 , 63 , 64 ), this issue was not resolved until recently, when results of tissue-specific Ddr1 and Ddr2 knockouts were reported.
Ddr1
Chondrocyte or osteoblast-selective inactivation of Ddr1 was achieved by crossing Ddr1 fl/fl mice with Col2a1 CreERT or Col1a1 CreERT mice ( 47 – 49 ). Chondrocyte-selective knockout of Ddr1 in tamoxifen-treated Col2a1 CreERT ; Ddr1 fl/fl mice led to a 10–20 percent decrease in body weight and length and delayed formation of a secondary ossification center ( 47 ). In contrast to early reports with global Ddr1 knockouts ( 46 ), decreases in chondrocyte proliferation, apoptosis and hypertrophy were reported ( 47 ). These changes were accompanied by an approximately 20 percent change in trabecular bone volume while cortical thickness was unchanged. In addition, the chondrocyte hypertrophy markers (ColX, MMP13, RUNX2) and the hedgehog pathway intermediate, IHH, all decreased. These results suggest that inactivation of Ddr1 in chondrocytes preferentially affects endochondral ossification. Results with Col1a1 CreERT ; Ddr1 fl/fl mice, where Ddr1 was preferentially inactivated in osteoblasts/osteocytes, were markedly different from chondrocyte-selective knockouts ( 48 ). In this case, minimal changes in endochondral ossification or trabecular bone parameters were noted while cortical thickness was reduced by approximately 50 percent. These changes were accompanied by a loss of mechanical properties and inhibition of osteoblast markers such as RUNX2, ALPL, BGLAP and COL1A1. In a second study with Col1a1 CreERT ; Ddr1 fl/fl mice, the same group examined the consequences of Ddr1 inactivation in adults over extended periods ( 49 ). In this case, modest changes in trabecular parameters were noted together with reductions in cortical thickness, osteoblast differentiation markers and cortical bone formation rate. These changes were accompanied by increased apoptosis and autophagy markers. No craniofacial changes were described in any of these studies.
Ddr2
Conditional knockout studies with Ddr2 were informed by results of localization and lineage tracing experiments showing preferential expression of this gene in GLI1+ skeletal progenitor cells, chondrocytes, and osteoblasts (see Sections 3.2 , 3.3 ). To determine functions of Ddr2 in these cells, Ddr2 fl/fl mice were crossed with Gli 1 CreERT , Col2a1 Cre or Bglap Cre mice and resulting long bone and craniofacial phenotypes examined ( 52 , 53 ). Inactivation of Ddr2 in Gli1 -expressing cells, induced by injecting neonatal Gli1 CreERT ; Ddr2 fl/fl mice with tamoxifen, resulted in essentially the same phenotype seen in Ddr2 slie/slie mice. Dwarfism was observed in both males and females, and this was associated with an approximately 12 percent reduction of growth plate length at P14. In addition, severe defects in endochondral bone formation were observed, particularly in males where trabecular BV/TV was reduced by approximately 50 percent. Associated reductions in trabecular number and thickness and increased trabecular spacing were also seen at 3 months. However, cortical BV/TV was not affected. The craniofacial phenotype of Gli1 CreERT ; Ddr2 fl/fl mice was also essentially identical to Ddr2 slie/slie mice; anterior-posterior skull length was reduced with an associated increase in anterior skull width. Mutants also exhibited frontal bone thinning and shortened nasal bones ( 53 ). Also like global knockouts, the anterior portion of frontal sutures failed to mineralize in most mice.
The phenotype of Col2a1 Cre ; Ddr2 fl/fl mice was similar to Gli1 CreERT ; Ddr2 fl/fl and Ddr2 slie/slie mice with the important exception that no defects in suture fusion were observed. Although it has been proposed that changes in growth of the cranial base can affect suture fusion ( 77 ), this is clearly not an adequate explanation for effects of Ddr2 inactivation on frontal sutures since Col2a1 Cre ; Ddr2 fl/fl mice had the same cranial base growth defects seen in Gli1 CreERT ; Ddr2 fl/fl mice. Based on this result, it was concluded that functions of Ddr2 in synchondrosis endochondral bone formation are independent from its functions in cranial sutures. Consistent with the observed reduction in tibial bone formation, mRNA levels of osteoblast and hypertrophic chondrocyte markers were all reduced in Col2a1 Cre ; Ddr2 fl/fl mice. These changes were accompanied by decreased mRNA levels of the hedgehog pathway intermediates, Ihh and Gli1 . Since defects in Hh signaling were also noted with conditional Ddr1 knockout ( 47 ) ( Section 4.3.1 ), this pathway may be a common target for DDRs.
Although Ddr2 was expressed in osteoblasts on trabecular and periosteal surfaces, it probably does not have a major function in mature osteoblasts since Bglap Cre ; Ddr2 fl/fl mice were essentially identical to wild type control mice. Because this Cre is mainly active in mature osteoblasts and, possibly, osteocytes, it is still possible that Ddr2 may have functions in earlier stages of the osteoblast lineage.
Overall, Ddr2 conditional knockout studies support the concept that this gene functions in earlier stages of bone formation (i.e., in Gli1 CreERT -positive skeletal progenitor cells and Col2a1 Cre -positive resting zone and proliferative chondrocytes) rather than in terminally differentiated osteoblasts or hypertrophic chondrocytes. Two cell culture studies reinforce this conclusion ( 52 ). In the first, E12.5 limb buds from Ddr2 fl/fl mice were used to prepare micromass cultures enriched in chondro-osteo progenitors that were treated with control or Cre adenovirus before growth in chondrogenic medium. Ddr2 inactivation strongly inhibited chondrogenesis as measured by Alcian blue staining or expression of chondrocyte markers. In the second study, CD140α + /CD51 + SPCs were prepared from Ddr2 fl/fl mice and grown in osteogenic medium after treatment with Cre adenovirus. In this case, Ddr2 inactivation strongly inhibited osteoblast differentiation (mineralization and expression of osteoblast markers).
Possible functions of Ddr2 in osteoclasts
The studies described above all focused on functions of Ddr2 in chondro-osteo lineage cells, which form chondrocytes, osteoblasts and osteocytes. However, there is still some controversy regarding possible Ddr2 functions in osteoclasts. On one hand, lineage tracing studies with Ddr2 mer–icre–mer ; ROSA26 LSLtdTomato mice did not show colocalization of the tdTomato label with TRAP-positive osteoclasts ( Section 3.3 ) and globally Ddr2 -deficient mice ( Ddr2 slie/slie mice) did not have any detectable changes in bone resorption markers or osteoclast differentiation capacity ( Section 4.1 ). On the other hand, evidence was presented that DDR2 has a suppressive effect on osteoclast formation in cell culture models ( 78 ). DDR2 protein and mRNA were detected at low levels in the RAW264.7 macrophage cell line and primary cultures of bone marrow macrophages, and these levels were further reduced with in vitro induction of osteoclast formation. Also, overexpression of Ddr2 in RAW264.7 cells was shown to inhibit osteoclast induction while shRNA knockdown of Ddr2 further stimulated this process. Furthermore, adenovirus-mediated overexpression of Ddr2 in the femur marrow cavity partially reversed osteoporosis in ovariectomized mice, a phenotype that is largely due to osteoclast activation. These studies suggest that Ddr2 can function in the monocytic lineage to suppress osteoclastogenesis. Lastly, in a recent study, Ddr2 fl/fl mice were crossed with LysM Cre mice to conditionally inactivate Ddr2 in myeloid lineage cells ( 79 ). The resulting animals had a hyperinflammatory phenotype after exposure to either collagen antibody-induced arthritis or a high-fat diet. After arthritis induction, mice had increased ankle inflammation, elevation of inflammatory markers, increased bone resorption and increased osteoclast surface per bone surface, as well as an approximately 15 percent decrease in bone mineral density.
Also, evidence was presented that loss of DDR2 increased macrophage repolarization from an M2 to M1 phenotype, resulting in enhanced inflammation. However, this study did not look for changes in bone density in the absence of an inflammatory stimulus. Nevertheless, this work supports a role for DDR2 in the suppression of osteoclastogenesis through its inhibitory actions on monocytic osteoclast precursors. However, it is still not clear why, in previous studies, changes in bone resorption markers were not detected in Ddr2 slie/slie mice or why osteoclasts were not detected as part of the DDR2 lineage ( 52 , 65 ). It is possible that effects on bone resorption in the absence of induced inflammation may not be large enough to affect bone mass or, alternatively, that in globally Ddr2 -deficient mice, interference with other DDR2-dependent processes may compensate for effects on osteoclastogenesis. Another possibility would be that DDR2 is not expressed in the osteoclast lineage and does not have a direct function in these cells, but rather modulates effects of macrophages on osteoclastogenesis. Studies where Ddr2 is more selectively inactivated only in osteoclasts (for example, using Ctsk-Cre or TRAP-Cre ) may be necessary to resolve this issue ( 80 ).
DDR2-dependent changes in osteoblast gene expression
A consistent finding from Ddr2 knockout studies is that osteoblast differentiation and the associated expression of osteoblast marker genes are suppressed. A limited number of studies have investigated the basis for this suppression. Because of its central role as a master transcriptional regulator of bone formation, studies to date have focused on RUNX2. This transcription factor is expressed at early times during bone development, coincident with the formation of cartilage condensations, and has roles in both hypertrophic cartilage formation and osteoblast differentiation [for review ( 81 )]. RUNX2 activity is subject to several controls including phosphorylation by ERK1/2 and p38 mitogen-activated protein kinases (MAPKs) ( 82 ). Both MAPKs are important for bone formation as demonstrated by in vivo gain and loss-of-function studies ( 83 , 84 ). Once activated, MAPKs translocate to the nucleus where they bind and phosphorylate RUNX2 on the chromatin of target genes ( 85 ). MAPKs phosphorylate RUNX2 on several serine residues, the most important being Ser301 and Ser319 ( 86 ). Phosphorylated RUNX2 recruits specific histone acetyltransferases and methylases to chromatin, resulting in increased H3K9 and H4K5 acetylation and H3K4 di-methylation, histone modifications associated with transcriptional activation, as well as decreased H3K9 mono-, di- and tri-methylation, histone marks associated with repression ( 85 ). These changes open chromatin structure to allow RNA polymerase II to bind and initiate transcription of osteoblast-related genes. RUNX2 phosphorylation and MAPK activity are obligatory for these changes since transfection of cells with a phosphorylation-resistant S301,319A mutant RUNX2 or treatment with MAPK inhibitors blocks transcription.
Since both ERK1/2 and p38 MAPKs are known downstream responses to DDR2 activation ( 25 ), it was hypothesized that this pathway could explain the observed stimulatory effects of DDR2 on osteoblast gene expression. This concept has been tested in cell culture studies with osteoblast cell lines as well as in osteoblasts from Ddr2 -deficient mice ( 64 , 65 ). In early studies with osteoblast cell lines and primary BMSC cultures, DDR2 was shown to stimulate osteoblast differentiation through a pathway involving ERK/MAPK activation and RUNX2 phosphorylation ( 64 ). Ddr2 shRNA inhibited differentiation while overexpression was stimulatory. These changes were paralleled, respectively, by increased or decreased ERK/MAPK activity, RUNX2 phosphorylation and transcriptional activity. Significantly, effects of Ddr2 shRNA knockdown could be overcome by transfecting cells with a phosphomimetic Runx2 S301,319E mutant, in which replacement of serine with glutamate mimics a phosphate group. In separate studies referenced in Section 4.1 ( 65 ), calvarial preosteoblasts or BMSCs isolated from Ddr2 slie/slie mice were found to be deficient in their ability to undergo osteoblast differentiation while BMSCs from these mice exhibited enhanced adipogenic differentiation. The reduced osteoblast differentiation in Ddr2 -deficient cells was directly related to reduced ERK/MAPK activity and RUNX2-S319 phosphorylation and was rescued by transfection with the RUNX2 S301/319E mutant described above. The ability of DDR2 to stimulate ERK/MAPK activity may also explain the increase in marrow fat observed in Ddr2 slie/slie mice. In addition to phosphorylating RUNX2, ERK1/2 can phosphorylate the adipogenic transcription factor, PPARγ, on Ser112. In this case, however, phosphorylation inhibits transcriptional activity. By preventing this inhibitory phosphorylation, Ddr2 knockout would be expected to restore PPARγ activity to permit formation of marrow fat.
Consistent with this interpretation, transgenic mice containing a phosphorylation-resistant S112A PPARγ mutant have increased marrow fat and reduced bone mass ( 87 ).
Requirement for DDR2 in bone regeneration
Consistent with the marked effects of Ddr2 deficiency on bone development, inactivation of this gene was also shown to inhibit bone regeneration. Two regeneration models were examined, a calvarial bone defect and a tibial fracture ( 88 , 89 ). For the calvarial model, a 0.5 mm burr hole defect was generated in wild type or Ddr2 slie/slie mice and regeneration was examined at time points up to 12 weeks. In wild type mice, this type of defect was completely healed after 4 weeks, while no bone bridging was seen in mutant mice even after 12 weeks. Ddr2 , which was expressed in sutures and periosteal cells before injury, was detected in the injury site within 3 days and expanded during the healing process. Also, inactivation of Ddr2 in calvarial cells in culture reduced osteoblast differentiation. For the fracture model, a mid-shaft tibial fracture was created in wild type or Ddr2 slie/slie mice and fracture healing was monitored for 3 weeks. In this case, Ddr2 -deficient mice were unable to form complete unions at the fracture site as measured by the modified Radiographic Union Score for Tibia (mRUST) ( 90 ).
Functions of DDR2 in cartilage matrix organization and relationship to ECM stiffness
In the studies described above, the reduced linear growth of long bones and skulls in Ddr2 -deficient mice was attributed to proliferation defects in growth plate and synchondrosis chondrocytes in the absence of changes in apoptosis ( 52 , 53 ). Interestingly, an examination of chondrocyte morphology revealed that the normal organization of these cells into columns was disrupted with Ddr2 inactivation. This effect was seen in long bone growth plates but was particularly striking in cranial base synchondroses where the central resting zone was greatly expanded with widely separated disorganized cells ( 52 , 53 ). In some cases, chondrocytes actually shifted their orientation by 90 degrees to form an ectopic hypertrophic zone at right angles to the normal plane of synchondrosis organization. These changes were accompanied by loss of chondrocyte polarity as measured by disruption of the normally consistent orientation of GM130, a Golgi apparatus marker, relative to the nucleus and anterior-posterior axis of the skull. This may explain the proliferation defect seen in chondrocytes of Ddr2 -deficient mice since disruption of GM130 orientation is known to impair spindle assembly and cell division ( 65 ). The relevance of these findings to human physiology is emphasized by the observation that collagen matrix distribution is also disrupted in growth plate cartilage from SMED, SL-AC patients ( 66 ).
How might DDR2 affect chondrocyte polarity? One possibility is that it is necessary for collagen matrix organization and fibril orientation, which would subsequently affect chondrocyte orientation. Examination of the type II collagen distribution in both growth plates and synchondroses by immunofluorescence microscopy revealed a shift from a uniform distribution in the territorial matrix next to chondrocytes and the extraterritorial matrix between cell clusters in wild type mice to an uneven distribution restricted to the pericellular space adjacent to chondrocytes in mutants ( 53 ). These changes were accompanied by loss of type II collagen fibril orientation as measured by second harmonic generation (SHG) microscopy. This analysis detected a dramatic shift from a highly oriented matrix (high anisotropy) in synchondroses of wild type mice to a disorganized matrix (low anisotropy) in mutants, where fibrils had a randomized orientation ( 53 ). Although primary cilia have been related to cell polarity and collagen orientation in other systems ( 91 ), regulation of this important organelle by DDR2 has not been reported.
Another consequence of DDR2 maintaining collagen fibril orientation is an increase in overall ECM stiffness. Although this has not been examined during bone development, there are several examples in other experimental systems. For example, DDR2 in breast cancer-associated fibroblasts (CAFs) increases tumor stiffness by organizing type I collagen fibrils ( 92 ). Also, at sites of trauma-induced heterotopic ossification, DDR2 increases collagen fibril orientation as measured by SHG ( 93 ) (also see Section 7.3 ). In both cases, evidence was presented that DDR2 functioned in concert with collagen-binding β1 integrins to stimulate, on one hand, tumor metastasis to the lungs or, on the other, ectopic bone formation. As noted in Section 2 , fibrillar collagens I–III contain binding sites for both DDRs and integrins always separated by 96 amino acid residues. This characteristic spacing may allow collagen to simultaneously regulate both these receptors. For example, in breast tumor metastasis, DDR2 was found to stimulate CAF-mediated mechanotransduction by increasing integrin activation in response to collagen. This was accomplished by stimulating RAP1-mediated Talin1 and Kindlin2 recruitment to integrins in focal adhesions ( 92 ). Also, in trauma-induced heterotopic ossification, DDR2 was necessary for full activation of integrin-dependent signals such as focal adhesion kinase (FAK) activation as well as nuclear levels of the Hippo pathway intermediate, TAZ, and its downstream targets ( 93 ).
Involvement of DDRs in abnormal ossification
Given the involvement of DDRs in normal bone formation, it is not surprising that they are also involved when this process goes awry. In this section, DDR involvement in vascular calcification, osteoarthritis and heterotopic ossification will be discussed.
Vascular calcification
Initiated by insults such as high levels of circulating LDL cholesterol, diabetes or chronic kidney disease, vascular calcification is a key event in advanced atherosclerosis. Calcium phosphate crystals can be deposited either in the subendothelial intima of blood vessels (intimal calcification) or in the smooth muscle-rich media (medial calcification) ( 94 ). This latter process shares many similarities with normal bone formation. It is initiated by differentiation of vascular smooth muscle cells (VSMCs) or SMC progenitors into osteochondroprogenitor cells, which form bone-like structures in arteries through a process that mimics endochondral bone formation, as indicated by formation of cartilage that is subsequently converted into a bone-like structure ( 95 ). Like normal bone formation, this process requires interactions of progenitor cells with type I collagen and is mediated by the master transcriptional regulator of bone formation, RUNX2 ( 96 , 97 ). Vascular calcification can be induced in mice by feeding LDL receptor-deficient animals ( Ldlr −/− mice) a high fat, high cholesterol diet. Breeding a Ddr1 -null allele into Ldlr −/− mice resulted in animals that were resistant to developing vascular calcification ( 97 ). Subsequent analysis showed that calcification was inhibited via a mechanism involving suppression of phosphatidylinositol 3-kinase/AKT and p38/ERK MAP kinase signaling and inhibition of RUNX2 phosphorylation and activation ( 98 ). More recent studies extended this work by showing that DDR1 up-regulates its own synthesis in response to the stiffness of the matrix environment around VSMCs. This is accomplished by stimulating the nuclear translocation of the Hippo pathway intermediates, YAP and TAZ, to increase Ddr1 transcription and subsequent mineralization ( 99 ). This may explain the known relationship between arterial stiffening and acceleration of vascular calcification ( 100 ).
Osteoarthritis
Osteoarthritis (OA), a leading cause of joint degeneration, is characterized by cartilage degradation, osteophyte formation and joint mineralization ( 101 ). OA can occur in fibrocartilage of the temporomandibular joint (TMJ) or in hyaline cartilage of major joints such as the knee. OA in hyaline cartilage generally increases with age. In contrast, TMJ OA has an earlier onset ( 102 , 103 ). Interactions between chondrocytes and the ECM of hyaline cartilage and fibrocartilage may be key factors for understanding OA pathogenesis in these two tissues. TMJ fibrocartilage extracellular matrix mainly contains type I collagen while type II collagen predominates in hyaline joints ( 104 ). Both DDR1 and DDR2 are involved in OA etiology, although they may function through different mechanisms. Unlike DDR1, which is broadly but weakly activated by collagens I to IV, DDR2 is strongly activated by the types I and III collagen of TMJ fibrocartilage but is less responsive to type II collagen ( 28 ). Ddr2 is expressed at low levels in healthy adult hyaline cartilage joints but is abundant in TMJ fibrocartilage ( 58 ). Thus, Ddr2 is normally expressed at highest levels in an ECM environment that is conducive to its activation. Consistent with its distribution, Ddr2 is required for normal TMJ formation; global Ddr2 inactivation disrupts TMJ development beginning in neonates, which show an initial delay in condyle mineralization that persists into adulthood, leading to eventual joint degeneration and subchondral bone loss ( 58 ). In contrast, knee joints, which are composed of hyaline cartilage, are not affected by Ddr2 deficiency. Ddr1 global knockout mice, in contrast, exhibit a spontaneous rapid-onset TMJ OA that is seen by 9 weeks without involvement of other joints ( 105 ). The authors of this study proposed that induction of TMJ OA is related to the observation that loss of DDR1 was accompanied by a compensatory up-regulation of DDR2.
This is then activated by the type I collagen in TMJ fibrocartilage to induce OA. It is not known if these changes are seen in Ddr1 -deficient neonates although a separate study reported TMJ abnormalities in mice as young as 4 weeks ( 50 ).
DDR2 has also been related to OA in hyaline cartilage joints. In this case, the normally low levels of DDR2 in adults are increased with injuries such as trauma or surgical destabilization of the medial meniscus, which subsequently induce OA ( 106 ). Accordingly, globally Ddr2 -deficient mice or mice in which Ddr2 is selectively inactivated in articular cartilage are resistant to surgically-induced OA, indicating that DDR2 is required for OA induction in this tissue ( 107 , 108 ). However, overexpressing Ddr2 in hyaline cartilage does not lead to spontaneous OA formation unless hyaline cartilage ECM is altered by trauma ( 106 , 108 ). It has been proposed that trauma-induced damage to the ECM may disrupt the pericellular matrix around chondrocytes and allow them to interact with type II collagen fibrils, resulting in DDR2 activation and OA ( 107 ).
Heterotopic ossification
Heterotopic ossification (HO) is a debilitating condition that occurs after many traumatic injuries. In HO, PDGFRα+ connective tissue cells present in soft tissue adjacent to the injury site change their differentiation trajectory to form ectopic cartilage and bone ( 109 ). Ddr2 has recently been shown to play a role in the pathogenesis of HO ( 93 ). Single cell RNA sequencing revealed that Ddr2 is highly expressed by PDGFRα+ cells, which form the major cell lineages involved in HO formation. In HO, both DDR2 and phospho-DDR2, a marker of active DDR2, were shown to be significantly upregulated in PDGFRα+ cells within the tendon, peritendon, and soft tissue areas surrounding the HO site. Interestingly, DDR2 mediates HO formation after injury, as both Ddr2 slie/slie mice (global knockout) and tamoxifen-treated Pdgfa-Cre ER ; Ddr2 fl/fl mice (conditional knockout in progenitor cells) display significant reductions in Sox9 -expressing chondrocytes and safranin O-labeled cells, as well as reductions in ectopic bone formation due to extracellular matrix disorganization and FAK/YAP/TAZ dysregulation (described in Section 5 ). This study highlights how extracellular matrix alignment can have profound effects on HO progression and how DDR2 is an important regulator of this process.
Metabolic effects of Ddr2 deficiency and relationship to bone metabolism
In addition to inhibiting skeletal growth, global Ddr2 deficiency also affects metabolism. For example, Ddr2 slie/slie mice have elevated blood glucose levels, reduced body fat and increased lean body mass ( 75 ), elevated levels of circulating adiponectin and decreased serum leptin ( 65 ). It is not known if there is a relationship between these metabolic changes and the bone phenotype of these mice. However, as discussed in Sections 4.1 , 4.3 , the decrease in bone mass in Ddr2 slie/slie mice is paralleled by an increase in marrow fat, a change that may be related to the reduced ERK/MAPK activity in mutant mice. The consequences of this reduced MAPK activity would include suppression of RUNX2 and PPARγ phosphorylation, decreased osteoblast and increased marrow adipocyte gene expression and differentiation. Since marrow adipocytes are a major source of serum adiponectin ( 110 ), the increase in marrow adipocytes in Ddr2 slie/slie mice may explain the observed increase in serum adiponectin. However, specific knockout of the Adipoq gene in marrow adipocytes using a recently described double recombination strategy ( 111 ) would be necessary to definitively test this hypothesis.
Interestingly, Ddr2 is expressed in adipocytes. Early studies suggested possible direct effects of DDR2 on these cells, such as suppression of insulin-stimulated tyrosine phosphorylation of the insulin receptor in the 3T3-L1 adipocyte cell line ( 112 ). More recently, direct effects of DDR2 on adipocytes in vivo were examined using Adipo Cre ; Ddr2 fl/fl mice, where Ddr2 is inactivated in peripheral as well as marrow fat ( 113 ). In this study, mutant mice were protected from high fat diet-induced weight gain, a response that was attributed to decreased adipocyte size. Significantly, these animals also had a high bone mass phenotype accompanied by increases in both bone formation rate and resorption. These changes were explained by a DDR2-specific repression of adenylate cyclase 5 (Adcy5) in adipocytes that is removed in mutant mice, leading to increased cAMP production and lipolysis in marrow adipocytes. The released fatty acids in the marrow cavity then promote increased oxidative metabolism in osteoblasts, leading to increased osteoblast and osteoclast activity. Therefore, by modulating lipolysis in adipocytes, DDR2 can indirectly control bone formation. This mechanism may complement the more direct effects of DDR2 on skeletal progenitor cells described in Section 4.3.2 .
Summary and future perspectives
The study of DDR functions in bone is a relatively new research area and many questions remain about what these collagen receptors do and how they do it. As shown in this review, both DDR1 and DDR2 have functions in mineralized tissues with DDR2 perhaps having a greater role under physiological conditions. However, clear functions for DDR1 are also seen, particularly in pathological conditions such as vascular calcification.
Although tissue distribution studies, particularly for DDR1, are incomplete, the original conclusion that DDR1 functions in epithelia while DDR2 is in connective tissues may need revision, particularly for DDR1, which has clear functions in connective tissues like cartilage and bone. More detailed DDR1 localization and lineage tracing studies will be required to more fully understand where this collagen receptor functions. The observation that DDR2 is present in GLI1-positive skeletal progenitor cells of cranial sutures and, possibly, cartilage where it controls cell proliferation and differentiation to chondrocytes and osteoblasts is of particular interest. These studies suggest that DDR2, together with collagen binding integrins, allows certain classes of skeletal progenitor/stem cells to sense their ECM environment and modulate their differentiation state according to ECM stiffness and mechanical loads. As the more ancient of the two classes of collagen receptors, the DDRs were likely complemented by the newly emerging collagen-binding integrins when the vertebrate skeleton first evolved, so that these two receptors now work in concert. Another intriguing area is the possible function of DDR2 in osteoimmunology, where it may modulate activities of various myeloid lineages to control inflammation and bone resorption.
Although conditional knockout studies showed that DDR2 functions in skeletal progenitor cells and chondrocytes, little is known about its actual mechanism of action in these tissues. Current, albeit incomplete, knowledge in this area is summarized in Figure 2 . Some of its activities may be explained by modulation of MAP kinases which subsequently control osteogenic and adipogenic transcription factors through phosphorylation. However, this is likely only part of the story. The dramatic effects of DDR2 on collagen fibril orientation, matrix stiffness and cell polarity may also be an important part of an overall mechanism that still needs to be discerned. By modulating matrix stiffness-associated pathways including the Hippo pathway, DDR2 and integrins may work together to control stiffness-associated nuclear changes and transcription. These matrix signals may also modify the response of cells to soluble signals coming from growth factors or morphogens. All these topics are clearly fruitful areas for future investigations.
Recent discoveries on DDR function may also have important implications for the treatment of disease. For example, the demonstrated roles of DDR1 in vascular calcification and of DDR2 in osteoarthritis and heterotopic ossification suggest that specific DDR inhibitors already under development could be used to treat these disorders ( 114 ). Also, the recent discovery that DDR2 is required for skeletal regeneration may open new directions for therapy through the development of either DDR-activating tissue engineering scaffolds or other treatments that modify DDR activity.
Clearly, the study of DDRs in bone will continue to be a growing area of musculoskeletal research that holds much promise for exciting future discoveries.
Funding
Research from the authors' laboratory described in this article was supported by NIH/NIDCR grants DE11723, DE029012, DE029465, and DE030675, Department of Defense grant PR190899, research funds from the Department of Periodontics and Oral Medicine, University of Michigan School of Dentistry, and the Michigan Musculoskeletal Health Core Center (NIH/NIAMS P30 AR069620).
Introduction
Glaucoma, the second-most-common cause of blindness worldwide, is a neurodegenerative disease that culminates in the irreversible loss of retinal ganglion cells (RGCs) ( 1 ). Elevated intraocular pressure (IOP) is one of the major risk factors for glaucoma disease progression and associated vision loss ( 2 ). As such, lowering IOP, using both pharmacological and surgical methods, is currently the only effective treatment for glaucoma. IOP is a function of aqueous humor (AH) production and its drainage through both conventional and unconventional (uveoscleral) outflow pathways. The conventional pathway is the source of resistance to unimpeded AH outflow, which determines IOP and is regulated by the cells that inhabit this pathway: the trabecular meshwork (TM), Schlemm’s canal (SC), and distal venous vessels ( 3 – 5 ).
The major source of resistance in the TM is the region known as the juxtacanalicular tissue (JCT), which is adjacent to the inner wall of SC ( 6 , 7 ). The JCT is made up of extracellular matrix (ECM) materials interspersed with TM cells, which have long cellular processes that communicate with both the inner wall (IW) endothelial cells and trabecular meshwork cells in the corneoscleral meshwork region ( 8 – 11 ). The ECM in this region is hydrated, allowing AH to move through the JCT and into the SC lumen. The ECM here is incredibly dynamic and composed of many different molecules that can influence outflow resistance, thereby regulating IOP. In fact, the continual remodeling of the ECM is comparable to a healing wound, and is thought to be part of an adaptive mechanism for IOP fluctuations ( 12 ). In support of this idea, the ECM components change in response to changes in IOP that impact the preferential flow pathways for AH ( 4 , 13 ). Thus, flow through the TM is not uniform, but consists of low- and high-flow regions in which the TM expresses different ECM-related genes ( 14 – 16 ). In the glaucomatous TM, ECM dynamics and homeostasis are compromised, causing an excess of ECM to build up in the JCT region, creating increased resistance to outflow ( 16 – 18 ). The trigger for this ECM dysregulation is currently unknown, and a greater understanding of it is important for determining glaucomatous pathophysiology.
As in cancer cells, which robustly regulate and maintain their ECM, one likely mechanism for ECM regulation in the TM is via extracellular vesicles (EVs) [as reviewed in previous publications ( 19 , 20 )]. EVs are nanoparticles that are released by every cell type and have a multitude of functions, one of which is ECM regulation ( 21 , 22 ). As such, EVs are released from TM cells in vitro and from explanted TM tissue, and are abundant in aqueous humor ( 21 , 23 – 26 ). Moreover, EVs play a role in ECM regulation by delivering ECM protein cross-linkers, ECM proteases such as matrix metalloproteinases (MMPs), and tissue inhibitors of metalloproteinases (TIMPs), and also by binding partially digested ECM materials ( 20 , 27 ). We have previously shown that small extracellular vesicles (sEVs) released from TM cells in organ cultured explants bind fibronectin, and that this process was disrupted following treatment with glucocorticoids ( 21 ). Significantly, patients exposed to high levels of glucocorticoids to treat retinal disease have a high incidence of ocular hypertension due to increased ECM materials in the TM ( 28 , 29 ). Based on these studies, we hypothesized that the ECM binding profile and/or capacity of sEVs released from glaucomatous TM cells is altered compared with that of sEVs released from TM cells isolated from healthy eye donors. In this study, we compare the proteomic cargo of sEVs released from glaucomatous and non-glaucomatous TM cells in vitro .
Human trabecular meshwork cell culture
De-identified whole globes or corneal rims from human donors were obtained from Miracles in Sight (Winston-Salem, NC, USA) in accordance with the Declaration of Helsinki on research involving human tissue and with the approval of the Duke University Health System Institutional Review Board. The demographic characteristics of the human donors that contributed to this study are provided in Table 1 . Human TM cells were isolated using a blunt dissection technique, characterized, and cultured in our laboratory as previously described ( 30 , 31 ). For this study, nine separate TM strains were used between passages 3 and 6: six isolated from non-glaucomatous human tissue and three isolated from glaucomatous human tissue. For sEV collection, TM monolayers were differentiated in DMEM supplemented with 1% fetal bovine serum (FBS; Thermo Fisher Scientific, Waltham, MA, USA) and maintained by culturing in DMEM supplemented with 1% exosome-depleted FBS (Thermo Fisher Scientific) for approximately 90 days. The conditioned media from TM monolayers were collected every 48 h during media exchanges and stored at −80°C before further processing.
Small extracellular vesicle isolation
For this study, sEVs were isolated using a gentle double iodixanol (OptiPrep TM ; Sigma, USA) cushion ultracentrifugation, followed by an iodixanol cushioned-density gradient ultracentrifugation (C-DGUC), as described by Li et al. ( 32 ). In brief, collected conditioned media were centrifuged at 2,000 g for 10 minutes to remove cellular debris. The supernatants were centrifuged at 10,000 g for 30 minutes at 8°C. The resulting supernatant was carefully collected and layered onto a cushion of 60% iodixanol medium. Sedimented EVs were extracted from the iodixanol cushion interfaces and diluted using particle-free Dulbecco’s phosphate-buffered saline (PBS), layered over a 60% iodixanol cushion, and centrifuged at 100,000 g . The iodixanol cushion containing isolated sEVs was collected and used as the base layer for the iodixanol gradient. Density gradient ultracentrifugation was performed, and the medium was collected from the top in 1-mL increments to create 12 fractions. The fractions were then diluted in PBS and washed. The supernatant was discarded, and the remaining pellet containing purified EVs was resuspended in lysis buffer (100 mM Tris, pH 6.8, 2% sodium dodecyl sulfate (SDS)) and stored at −80°C until further analysis. The sEVs are typically found in fractions in a density range of approximately 1.07 g/mL–1.11 g/mL ( 33 ); however, we also detected the enrichment of EV markers in denser fractions and opted to assess these as a separate dataset.
Sample preparation and LC-MS/MS analysis
For each sample, approximately 8 μg of total protein was used to prepare peptide mixtures for proteomic profiling. The proteins were cleaved with the trypsin/endoproteinase Lys-C mixture (V5072; Promega, Madison, WI, USA) using the paramagnetic beads-based method ( 34 ). Each digest was dissolved in 15 μL of 1%/2%/97% (by volume) of trifluoroacetic acid/acetonitrile/water solution, and 5 μL was injected into a 5 μm×5 mm PepMap TM Neo C18 column (Thermo Scientific TM ) in 1% acetonitrile in water for 3 minutes at a rate of 5 μL/minute. The analytical separation was then performed using an EasySpray PepMap Neo 75 μm × 150 mm, 2 μm, C18 column (Thermo Scientific) over 90 minutes at a flow rate of 0.3 μL/minute at 35°C using the Vanquish TM Neo ultra-high-performance liquid chromatography (UHPLC) system (Thermo Scientific). The 5%–35% mobile phase B gradient was used, where phase A was 0.1% formic acid in water and phase B was 0.1% formic acid in 80% acetonitrile. The peptides separated by LC were introduced into the Q Exactive TM HF Orbitrap mass spectrometer (Thermo Scientific) using positive electrospray ionization at 1900 V and with a capillary temperature of 275°C. The data collection was performed in the data-dependent acquisition (DDA) mode with 120,000 resolutions (at m/z 200) for MS1 precursor measurements. The MS1 analysis utilized a scan from 375 m/z to 1500 m/z, with a target automatic gain control (AGC) value of 1.0e6 ions, the radiofrequency (RF) lens set at 30%, and a maximum injection time of 50 ms. Advanced peak detection and internal calibration (EIC) were enabled during data acquisition. The peptides were selected for MS/MS using charge state filtering, monoisotopic peak detection, and a dynamic exclusion time of 25 seconds with a mass tolerance of 10 ppm. 
MS/MS was performed using higher-energy C-trap dissociation (HCD) with a collision energy of 30% ± 0.5%, detection in the ion trap using a rapid scanning rate, an AGC target value of 5.0e4 ions, a maximum injection time of 150 ms, and ion injection for all available parallelizable time enabled.
Protein identification and quantification
For label-free relative protein quantification, raw mass spectral data files (.raw) were imported into Progenesis QI for Proteomics 4.2 software (Nonlinear Dynamics) for alignment of technical replicate data and peak area calculations. The peptides were identified using Mascot version 2.5.1 (Matrix Science) for searching the UniProt 2019-reviewed human database, which contains 20,243 entries. The Mascot search parameters were as follows: 10 ppm mass tolerance for precursor ions; 0.025 Da for fragment-ion mass tolerance; one missed cleavage by trypsin; a fixed modification of carbamidomethylation of cysteine; and a variable modification of oxidized methionine. Only the proteins identified with two or more peptides (i.e., Mascot scores > 15 for a peptide and > 50 for a protein corresponding to a protein confidence of p < 0.05), were included in the protein quantification analysis. To account for variations in experimental conditions and amounts of protein material in individual LC-MS/MS runs, the integrated peak area for each identified peptide was corrected using the factors calculated by the automatic Progenesis algorithm, utilizing the total intensities for all peaks in each run. The values representing protein amounts were calculated based on the sum of ion intensities for all identified constituent non-conflicting peptides. Protein abundances were averaged across the two duplicate runs for each sample.
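The label-free quantification scheme described above (per-run normalization factors applied to peptide peak areas, protein amounts as the sum over non-conflicting peptides, then averaging across duplicate runs) can be sketched in a few lines of Python. This is an illustrative reimplementation with made-up peak areas, not the Progenesis algorithm itself; the normalization factors are assumed to be precomputed.

```python
from statistics import mean

def protein_abundance(peptide_areas, norm_factor):
    # Protein amount = sum of run-normalized peak areas of its
    # non-conflicting peptides (factor as computed per LC-MS/MS run).
    return sum(area * norm_factor for area in peptide_areas)

def quantify(runs):
    # runs: list of (peptides_by_protein, norm_factor) pairs,
    # one per technical replicate; returns run-averaged abundances.
    per_run = [
        {p: protein_abundance(areas, factor) for p, areas in rep.items()}
        for rep, factor in runs
    ]
    proteins = set().union(*per_run)
    return {p: mean(r.get(p, 0.0) for r in per_run) for p in proteins}

# Illustrative duplicate runs (peak areas and factors are invented)
run1 = ({"FN1": [1000.0, 800.0], "EDIL3": [500.0]}, 1.0)
run2 = ({"FN1": [900.0, 700.0], "EDIL3": [600.0]}, 1.1)
abundances = quantify([run1, run2])
```

In this toy example the FN1 abundance is the mean of 1,800 (run 1) and 1,760 (run 2, after the 1.1 normalization factor), i.e., 1,780 arbitrary units.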
PANTHER analysis of the most abundant proteins
To assess the different protein classes, we used the Protein Analysis Through Evolutionary Relationships (PANTHER) software. We pooled the proteomic datasets from each biological replicate and sorted these by abundance. We then filtered out duplicate proteins to obtain the top 100 proteins from the pooled datasets. These were then inputted into the PANTHER website and analyzed for protein class. The data exported from this was then transferred to GraphPad Prism (GraphPad Software Inc., San Diego, CA, USA) to create charts used in figures.
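The pooling, sorting, and deduplication steps used to build the top-100 lists can be expressed compactly. This is a minimal sketch with hypothetical protein names and abundances; the real inputs were the per-replicate proteomic tables described above.

```python
def top_n_proteins(replicates, n=100):
    # Pool per-replicate {protein: abundance} tables, sort the pooled
    # entries by abundance, and keep the first occurrence of each protein.
    pooled = sorted(
        (item for rep in replicates for item in rep.items()),
        key=lambda kv: kv[1],
        reverse=True,
    )
    top, seen = [], set()
    for protein, _ in pooled:
        if protein not in seen:
            seen.add(protein)
            top.append(protein)
            if len(top) == n:
                break
    return top

# Two illustrative replicates (abundances are invented)
reps = [{"FN1": 9.0, "EDIL3": 4.0, "ACTB": 2.0},
        {"FN1": 8.0, "PLEC": 5.0}]
top3 = top_n_proteins(reps, n=3)  # ["FN1", "PLEC", "EDIL3"]
```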
Western blotting
A standard Western blotting protocol was followed. Briefly, EV samples were solubilized in Laemmli buffer and approximately 5 μg of each sample was loaded onto SDS-polyacrylamide (PAGE) gels, separated electophoretically, and transferred to nitrocellulose membranes. The membranes were blocked with 5% bovine serum albumin (BSA) in tris-buffered saline with 0.01% Tween-20 (TBST) for 1 hour at room temperature on a rocking platform. After blocking, membranes were incubated with primary antibodies in blocking buffer at 4°C overnight. The antibodies used were CD9 (ab263019; Abcam, Cambridge, UK), TSG101 (ab125011; Abcam), Calnexin (ab133615; Abcam), albumin (ab207327; Abcam), epidermal growth factor (EGF)-like repeats and discoidin domains 3 (EDIL3; ab190692; Abcam), and fibronectin (ab6328; Abcam). Primary antibodies were removed, and membranes were washed three times for 10 minutes at room temperature in TBST. Horseradish peroxidase (HRP)-conjugated secondary antibodies [goat anti-mouse (#115-035-146) and goat anti-rabbit (#111-035-144); Jackson ImmunoResearch, West Grove, PA, USA] were added and incubated at room temperature for 1 hour. The membranes were washed again, as before, and developed using chemiluminescent reagents (SuperSignal TM West Atto; A38555; Thermo Fisher Scientific). The membranes were then imaged using the ChemiDoc TM Imaging System (BioRad) or the iBright Imaging System (Invitrogen). Protein band intensity was normalized to the concentration of particles (NTA) per protein (ug), which was measured by bicinchoninic acid (BCA) assay, as previously described ( 21 ).
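The band-intensity normalization described above (intensity divided by NTA particle count per microgram of BCA-measured protein) can be sketched as follows. The helper names and all numeric values are illustrative, not from the study.

```python
def particles_per_ug(nta_particles_per_ml, sample_volume_ml, protein_ug):
    # Total particle count (from NTA) divided by protein mass (from BCA).
    return nta_particles_per_ml * sample_volume_ml / protein_ug

def normalized_band_intensity(band_intensity, particles):
    # Band intensity expressed per particle-per-microgram unit.
    return band_intensity / particles

# Illustrative values only
ppu = particles_per_ug(2.0e9, 0.5, 5.0)       # 2.0e8 particles/ug
norm = normalized_band_intensity(4.0e8, ppu)  # 2.0
```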
Nanoparticle tracking analysis
The ZetaView nanoparticle tracking analysis instrument (Particle Metrix, Ammersee, Germany) was used to determine vesicle diameter and estimated particle concentration. For analysis, the instrument was calibrated for size using 100-nm polystyrene beads, and the sample material was diluted 1:5,000 in EV-free PBS. Averages of measurements taken in triplicate from eight positions within the imaging chamber at 25°C under a 405-nm laser were used to estimate vesicle size and concentration.
Statistical analysis
Data are presented as the average (confidence interval range). The Student’s t -test was used to assess statistical significance between groups, with a p -value < 0.05 determined as being statistically significant. | Results
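As a sketch of the statistical comparison, the two-sample Student's t statistic with pooled variance can be computed as below; obtaining the p-value then requires the t distribution with the returned degrees of freedom (e.g., `scipy.stats.t.sf` or `scipy.stats.ttest_ind`, not shown here to keep the example dependency-free). The input data are illustrative.

```python
import math
from statistics import mean, variance

def students_t(a, b):
    # Two-sample Student's t statistic with pooled variance;
    # significance is assessed against the t distribution with
    # len(a) + len(b) - 2 degrees of freedom.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Illustrative groups (e.g., normalized intensities per cell strain)
t, df = students_t([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```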
Characteristics of small extracellular vesicles released from TM cells
The EVs were isolated from conditioned media from glaucomatous TM (GTM) cells and non-glaucomatous or “normal” TM (NTM) cells. The gradient fractions with densities of approximately 1.05 g/mL–1.10 g/mL (fractions 5–8) and approximately 1.11 g/mL–1.17 g/mL (fractions 9–10) were analyzed.
Nanoparticle tracking analysis (NTA) was used to determine the size distribution and concentration of released EVs. The results showed no significant difference in the number of EVs released from the NTM and GTM samples, although there was some variability between the cell strains ( Figure 1A ). The sizes of EVs from both the NTM and GTM cells were within the expected range (30 nm–150 nm) for sEVs, and there was no significant difference in the size distribution between the groups ( Figure 1B ).
Western blotting was conducted to determine the presence of EV markers CD9 and TSG101 in the EV preparations—both CD9 and TSG101 were found in fractions 5–8, but not as consistently in fractions 9 and 10 ( Figure 1C ). The preparations were negative for albumin and calnexin, indicating that they were free from cellular debris and thus pure sEV preparations ( Figure 1C ).
Proteomic analysis of sEVs released from TM cells
To assess the proteomic cargo of EVs isolated from TM cells, we used mass spectrometry and validated target proteins by Western blotting. Mass spectrometry was conducted on isolated EVs and the top 100 most abundant proteins from the proteomic datasets were analyzed ( Figure 2 ). The summaries of the top 100 proteins from the NTM EVs and the GTM EVs are presented in Tables 2 , 3 , respectively. The complete proteomic datasets for the NTM and GTM groups are shown in Supplementary Tables 1 , 2 , respectively. The Venn diagrams demonstrate the distinct proteomic cargos of the NTM and GTM EVs: only 35 of the 100 most abundant proteins in each group were shared between the groups. In both groups, fibronectin was the most abundant protein found on the sEVs from the TM cells. Many of the other most abundant proteins in each group were ECM or cytoskeleton related; however, they differed between groups. On the NTM sEVs, collagen isoforms, laminin isoforms, emilin, and other ECM proteins constituted the most abundant proteins. In contrast, on the GTM sEVs, fibrillin, plectin, and actins were the most abundant proteins. Collectively, this demonstrates the predominance of ECM glycoproteins on NTM sEVs, compared with cytoskeleton- and actin-related proteins on GTM sEVs.
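The overlap comparison behind the Venn diagrams reduces to set arithmetic on the two top-100 lists. A minimal sketch, with short hypothetical lists standing in for the full datasets:

```python
def venn_counts(top_a, top_b):
    # Counts of proteins unique to each list and shared between them.
    a, b = set(top_a), set(top_b)
    return len(a - b), len(a & b), len(b - a)

# Hypothetical abbreviated top lists (real lists had 100 entries each,
# with 35 shared)
ntm_top = ["FN1", "COL1A1", "LAMA5", "EMILIN1"]
gtm_top = ["FN1", "FBN1", "PLEC", "ACTB"]
only_ntm, shared, only_gtm = venn_counts(ntm_top, gtm_top)
```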
PANTHER analysis of 100 most abundant proteins in normal and glaucoma TM sEVs
Next, using the PANTHER database, we determined the protein class differences between the NTM and GTM groups ( Figure 3 ) ( 35 , 36 ). We analyzed the differences between all isolated sEVs ( Figures 3A , D ), as well as separating each of our sEV sub-populations into fractions 5–8 ( Figures 3B , E ) and fractions 9 and 10 ( Figures 3C , F ).
From the PANTHER database, the protein class analysis in the total NTM vs. GTM dataset ( Figures 3A , D ) showed a lower percentage of extracellular matrix proteins associated with sEVs from GTM cells than from NTM cells (5.7% vs. 13.1%, respectively). In the sEV subpopulation consisting of fractions 5–8 ( Figures 3B , E ), there was an increased percentage of ECM proteins found in the GTM EVs compared with the NTM EVs (4.9% vs. 2.8%). In the sEV subpopulation consisting of fractions 9 and 10 ( Figures 3C , F ), there was a decreased percentage of ECM proteins associated with the GTM EVs compared with the NTM EVs (5.7% vs. 15.1%).
There was also a lower percentage of cell adhesion molecules (CAMs) on the GTM EVs than on the NTM EVs (3.8% vs. 11.2%). This was also demonstrated in the fractions 5–8 sEV group (GTM 3.9% vs. NTM 8.3%) and in the fractions 9 and 10 group (GTM 3.8% vs. NTM 8.5%).
However, the GTM EVs had a higher percentage of cytoskeletal proteins associated with them than NTM EVs (24.7% vs. 11.2%, respectively). This increase in cytoskeletal proteins was maintained in both the 5–8 sEV subpopulation (GTM 20.4% vs. NTM 11.1%) and in the 9 and 10 sEV subpopulation (GTM 21.0% vs. NTM 12.3%).
Decreased levels of fibronectin and EDIL3 associated with sEV from GTM cells
Two of the most abundant proteins found in sEVs across all biological replicates from NTM and GTM cells were fibronectin and EDIL3. Fibronectin is a major component of the ECM in the TM ( 37 ), and EDIL3 is a ligand of integrin αV/β3 that promotes endothelial cell adhesion and migration, promotes epithelial-to-mesenchymal transition, and is associated with both endothelial cells and the extracellular matrix ( 38 – 44 ). When validating these proteins, we used two unique populations of EVs from both the NTM and GTM groups—sEVs from fractions 5–8 and sEVs from fractions 9 and 10.
We validated the presence of these ECM-related proteins within the EV cargo by Western blotting. Quantification of the bands was conducted, and the band intensity was normalized to the number of particles per sample. Figure 4 shows a decrease in fibronectin associated with EVs from GTM cells compared with EVs from NTM cells in both EV populations, i.e., fractions 5–8 [fold change 0.21 (−0.18, 0.6); p = 0.1476] and fractions 9 and 10 [0.03 (−0.01, 0.06); p = 0.0011]. Figure 4 also shows the decreased abundance of EDIL3 in EVs from GTM cells compared with NTM cells in both EV populations, i.e., fractions 5–8 [0.14 (−0.16, 0.44); p = 0.0707] and fractions 9 and 10 [0.11 (−0.06, 0.29); p = 0.0195].
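A fold change with a confidence interval of the kind reported above can be computed from per-strain GTM/NTM intensity ratios. This sketch uses invented intensities and a hardcoded t critical value (4.303 for two degrees of freedom at the 95% level, an assumption; the value must match the number of ratios), so it illustrates the form of the calculation rather than reproducing the study's numbers.

```python
import math
from statistics import mean, stdev

def fold_change_ci(gtm, ntm, t_crit):
    # Mean GTM/NTM ratio of normalized band intensities, with a
    # t-based confidence interval; t_crit must match df = n - 1.
    ratios = [g / n for g, n in zip(gtm, ntm)]
    m = mean(ratios)
    half = t_crit * stdev(ratios) / math.sqrt(len(ratios))
    return m, (m - half, m + half)

# Illustrative normalized intensities only (three hypothetical strains)
fc, (lo, hi) = fold_change_ci([0.1, 0.2, 0.3], [1.0, 1.0, 1.0],
                              t_crit=4.303)
```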
The current study rigorously profiled the proteome of sEVs from GTM and NTM cells. Although we observed some similarities, the overall proteomic profiles of the sEV cargo from the NTM and GTM cells were very different. Specifically, there were decreased numbers of ECM proteins in the GTM sEVs. The two major ECM proteins, fibronectin ( 37 ) and EDIL3 ( 38 ), were significantly decreased in the GTM sEVs compared with the NTM sEVs. Taken together, our in vitro findings were consistent with the aberrant accumulation of ECM materials in glaucoma in vivo , which likely contributes to increased outflow resistance, and, therefore, increased IOP.
Although we investigated sEVs released from TM cells specifically, it is highly possible that sEVs released from other cells in the eye may also affect the TM tissue as the AH moves through the conventional outflow pathway and exits the eye. One such potential source of sEVs is the non-pigmented ciliary epithelium, which has been shown to release sEVs and influence ECM remodeling in the TM ( 45 ). This is a limitation of the current study, as we did not examine the effects of non-TM sEVs on TM cells.
When assessing the proteomics dataset, we only examined the 100 most abundant proteins present in each of our sEV populations, as lower-abundance proteins were more likely to represent contaminating proteins. For this analysis, the PANTHER database was utilized to assess the classes of proteins associated with the sEV populations; we separately analyzed the total sEV population and the two separate subpopulations. This was in order to sufficiently compare the sEVs that we were able to isolate from the TM cells, as different sEV populations released from the cells may perform different functions in physiologically normal and diseased states. In line with previous studies, we found that there were fewer ECM proteins associated with sEVs from GTM cells than with NTM sEVs ( 21 ). This was true for both the overall sEV population and the fractions 9 and 10 subpopulation: both showed approximately 50% fewer ECM proteins in the GTM sEV group. There were also more CAMs found on the sEVs from NTM cells. CAMs are involved with ECM protein binding, which is a likely mechanism for the differences in ECM regulation/protein binding by sEVs. The results indicate that the different sEV populations have different functions within the TM.
Based on previous studies, we examined the binding of fibronectin with the GTM sEVs; fibronectin has previously been shown to be a highly abundant protein on TM sEVs ( 21 , 23 ). In the presence of dexamethasone, a steroid that induces a glaucoma phenotype in vitro and is known to cause elevated IOP and glaucoma in patients, there were decreased levels of fibronectin in the TM sEVs ( 21 ). Consistent with these data, our study showed that fibronectin was one of the most abundant proteins across our TM sEV samples; however, it was decreased in the GTM sEVs compared with the NTM sEVs. Although fibronectin showed a decrease in both subpopulations of sEVs, the decrease only reached significance in the sEVs from the subpopulation comprising fractions 9 and 10. This may be because of the small number of biological replicates used; we used only three primary open-angle glaucoma (POAG) cell strains and compared these with four non-glaucoma cell strains by Western blotting. Our hypothesis is that the binding capacity of sEVs from GTM cells is altered, and, therefore, these sEVs did not bind fibronectin to deliver proteases such as MMPs ( 46 ) to degrade it, or to target it for phagocytosis by TM cells. If sEVs from GTM cells are not contributing to fibronectin degradation, excess fibronectin will accumulate in the TM, contributing to the ECM buildup and increased outflow resistance. Significantly, to our knowledge, there are no other studies that have compared sEVs from NTM and GTM cells. Instead, previous studies examined sEVs from NTM cells using single cell strains with technical replicates, or up to six biologically independent cell strains ( 23 , 47 , 48 ).
An abundant protein observed in all samples was EDIL3, an ECM protein involved in endothelial cell adhesion, migration, and angiogenesis when bound with integrins ( 38 – 40 ). EDIL3 is also associated with epithelial–mesenchymal transition, which is indicative of a more fibrotic phenotype and environment, and is linked to the transforming growth factor beta (TGFβ) signaling pathway ( 41 – 43 ). TGFβ is a key driver of ECM production and fibrosis in the TM, and TGFβ has been shown to be increased in the aqueous humor of patients with glaucoma ( 49 – 52 ). The results here show a decrease in EDIL3 in GTM sEVs compared with NTM sEVs, indicating that more EDIL3 may remain in the GTM because it is not being removed via sEVs. Zhang et al. showed that depletion of EDIL3 suppressed the proliferation and migration of lens epithelial cells, decreased the expression of α-smooth muscle actin and vimentin, and decreased Smad2 and Smad3 phosphorylation ( 43 ). In glaucoma, α-smooth muscle actin and vimentin are altered and can be induced by TGFβ ( 53 , 54 ). Increased levels of EDIL3 in the TM cells would likely cause increased expression of profibrotic factors, such as α-smooth muscle actin and vimentin, and elements of the TGFβ pathway, which may lead to increased ECM accumulation in the TM cells owing to TGFβ activity.
Finally, when looking at the proteomic datasets overall, it is apparent that the protein cargo of sEVs differs greatly between the GTM and NTM groups. Out of the top 100 most abundant proteins, there were only 35 overlapping proteins between the two groups, and across the whole proteome, only approximately 350 proteins overlapped. Of these, many were ribosomal proteins and extracellular matrix proteins. This shows that sEVs play different roles in diseased and non-diseased cells. Furthermore, the most abundant ECM proteins are quite different between the two groups, which also demonstrates that their roles in ECM homeostasis are also distinct from each other.
Taken together, the data presented here indicate that the sEVs released from TM cells likely play a role in ECM binding and thus turnover in the conventional outflow pathway. The significant differences in the ECM profiles of sEVs from TM cells isolated from healthy versus glaucomatous donor eyes suggest that dysfunctional binding and opsonization of the ECM in glaucomatous eyes may contribute to decreased ECM degradation in the TM, ultimately leading to increased IOP.

Author contributions
FM: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing. BR: Data curation, Formal analysis, Investigation, Writing – review & editing. HR: Data curation, Investigation, Writing – review & editing. NS: Data curation, Formal analysis, Investigation, Methodology, Writing – review & editing. WS: Conceptualization, Funding acquisition, Project administration, Resources, Supervision, Writing – review & editing.
Introduction:
Extracellular matrix (ECM) materials accumulate in the trabecular meshwork (TM) tissue of patients with glaucoma, which is associated with a decrease in aqueous humor outflow and therefore an increase in intraocular pressure. To explore a potential mechanism for ECM regulation in the TM, we purified extracellular vesicles (EVs) from conditioned media of differentiated TM cells in culture isolated from non-glaucomatous and glaucomatous human donor eyes.
Methods:
EVs were purified using the double cushion ultracentrifugation gradient method. Fractions containing EV markers CD9 and TSG101 were analyzed using nanoparticle tracking analysis to determine their size and concentration. We then determined their proteomic cargo by mass spectrometry and compared protein profiles of EVs between normal and glaucomatous TM cells using PANTHER. Key protein components from EV preparations were validated with Western blotting.
Results:
Results showed changes in the percentage of ECM proteins associated with EVs from glaucomatous TM cells compared to non-glaucomatous TM cells (5.7% vs 13.1% respectively). Correspondingly, we found that two ECM-related cargo proteins found across all samples, fibronectin and EDIL3 were significantly less abundant in glaucomatous EVs (<0.3 fold change across all groups) compared to non-glaucomatous EVs.
Discussion:
Overall, these data establish that ECM materials are prominent proteomic cargo in EVs from TM cells, and that their binding to EVs is diminished in glaucoma.

Supplementary Material

Funding
The authors declare financial support was received for the research, authorship, and/or publication of this article. This work was supported by National Institutes of Health Grant EY031737 (FSM), EY022359 (WDS) and National Institutes of Health Core Grant EY014800 (Moran Eye Center) and EY005722 (Duke Eye Center), and an Unrestricted Grant from Research to Prevent Blindness, New York, NY, to the Department of Ophthalmology & Visual Sciences, University of Utah.
Data availability statement
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE ( 55 ) partner repository with the dataset identifier PXD044913 and 10.6019/PXD044913.

Front Ophthalmol (Lausanne). 2023 Oct 6; 3:1257737
|
Introduction
Building computational and mathematical models to simulate complex non-linear biological processes requires many key steps, from defining the model to identifying ranges of values for its many corresponding parameters. Complex models often require concurrent estimation of dozens of parameters using reference datasets derived from biological experiments. However, identifying relevant ranges of parameter values in a complex model complicates parameter estimation: traditional methods that find parameter point estimates are not useful, and instead parameter estimation methods must identify ranges of biologically plausible parameter values. For example, models built to study infectious diseases would require parameter ranges wide enough to produce the biological variation that spans healthy and disease outcomes. This atypically broad objective of finding multiple solutions is sometimes referred to as suboptimal non-linear filtering [ 1 ]. In this paper, identifying acceptable ranges of model parameters is called calibration, which is contrasted against traditional parameter estimation in Figure 1 .
The choice of calibration method depends on the reference experimental datasets as well as the model type being calibrated. Calibrating to reference experimental datasets such as dynamical and/or spatial experimental datasets is discussed in the next section. The remaining sections cover the calibration of different model types. Models that have tractable likelihood functions, and are therefore non-complex, do not require the methods discussed in this paper. Models of ordinary differential equations (ODEs) that fall under this non-complex scenario are briefly discussed. Complex models contain many structurally unidentifiable parameters requiring particular attention to calibration [ 2 ]. For the remaining complex models such as complex ODEs, partial differential equations (PDEs), agent-based models (ABMs), and hybrid models, we provide a decision tree to suggest which calibration method is most appropriate by examining both the model type and the characteristics of the corresponding datasets. We explore three calibration methods, namely, the calibration protocol (CaliPro), approximate Bayesian computing (ABC), and stochastic approximation. To aid discussion of this wide range of methods, we provide descriptions of phases and keywords for quick reference (see Table 1 ).
Characteristics of reference experimental datasets
A defining feature of complex biological systems is their incomplete, partially observable, and unobservable datasets. The uncertainty of incomplete and partially observable experimental results favors fitting model simulations to the boundaries of such datasets, rather than treating modes within their distribution ranges as significant. The limitation of these partial datasets therefore justifies calibrating to ranges of data rather than to individual data points. On the other hand, unobservable data require modelers to assign parameters representing biological processes that may not be experimentally validated, but whose parameter ranges need to be calibrated alongside other experimental datasets. The number of non-experimentally bound parameters typically exceeds the number of parameters that can be directly bound to available datasets, creating many degrees of freedom in the calibration process.
Several types of reference experimental datasets such as numerical, categorical, temporal, spatial, and synthetic datasets may be used to calibrate complex models. Numerical datasets can either be continuous or discrete and are well supported by inference methods even in the presence of missing data [ 6 – 8 ]. Calibrating a dynamical model to temporal datasets typically requires several comparisons of simulated trajectories to the temporal datasets. Calibrating a model to spatial datasets requires finding appropriate numerical or categorical summary statistics, which may include image pre-processing steps to identify and match features of interest [ 9 – 11 ].
Calibrating non-complex models with numerical likelihood evaluation
For non-complex models, such as biological models comprised of systems of ordinary differential equations (ODEs), it is much easier to identify ranges for parameter values from corresponding datasets. Likelihood functions describe the joint probability of the observed datasets under the chosen model. Thus, evaluating the likelihood function directly links model parameters with experimental data and guides the calibration process to directly identify parameter ranges. Likelihood calculation is central to probabilistic modeling [ 12 ]. Although likelihood-based parameter estimation methods are typically used when analytical solutions are not available, likelihoods can be found for such models to calibrate their parameters. For example, ODEs can produce exact probability density functions using the method of characteristics, which are used to calculate their likelihood [ 13 ]. However, the likelihood becomes intractable for complex models, therefore requiring a different approach.
Background concepts to calibrate complex models
Above we described the case for models that can obtain parameters using a likelihood function determined from direct fits to data. In the next section we will review two methods that estimate parameter ranges for complex biological systems where likelihood functions are not obtainable. We describe two published methods, the Calibration Protocol (CaliPro) [ 14 ] and approximate Bayesian computing (ABC) [ 15 – 18 ]. In this work, our goal is to review these model calibration methods and to compare and contrast them. We first set up background information that is applicable to both approaches and then provide more detail for each approach via examples. We provide a decision tree to help guide which approach to use ( Figure 2 ).
Inapplicability of stochastic approximation
Although stochastic approximation methods can also be used in contexts when the likelihood function is unavailable, this method does not directly serve our purpose of calibration. Stochastic approximation either uses a variant of finite differences to construct a stochastic gradient or stochastic perturbations that are gradient-free [ 19 , 20 ]. However, both variants attempt to converge to an approximate maximum likelihood and then find the variance around the converged result using bootstrap. This is not the same as the intended calibration goal of preserving broad parameter sampling around parameter space containing multiple solutions that fit the experimental data.
Sampling outcome and parameter spaces
Sampling experimental datasets can be thought of as sampling multidimensional outcome space ( Figure 3 ). Datasets may not be available for some outcomes and thus increase the burden of parameter sampling. The challenges of limited datasets and high-dimensional parameter and outcome spaces motivate the need to use careful parameter sampling schemes.
Parameter sampling schemes
Complex models with their many parameters can be thought of as forming a hypercube of high-dimensional parameter space. Increasing the number of model parameters or dimensions exponentially increases the combinatorial complexity of visiting parameter space on an evenly spaced grid of a discretized parameter hypercube.
However, evenly spaced grid values are appropriate only when probability is spread evenly across the sampled range. Each parameter value has an associated probability. Only uniform probability distributions are linear in this sense, because any value within the bounds of a uniform distribution has the same probability. Parameters with non-uniform probability distributions instead require generating samples in accordance with their cumulative probability density ( Figure 4 ). This allows samples to better capture characteristic skewness and related features, so that the true parameter distribution can be inferred correctly.
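As an illustration of sampling in accordance with cumulative probability density, the short sketch below maps uniformly sampled percentiles through the inverse CDF of a skewed prior. The gamma distribution and its parameters are assumptions chosen purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Percentiles sampled in (0, 1); a stratified scheme such as Latin
# hypercube sampling could supply these instead of plain Monte Carlo.
percentiles = rng.uniform(size=10_000)

# Map the percentiles through the inverse CDF (SciPy's ppf) of a
# skewed prior -- a gamma distribution chosen purely for illustration.
prior = stats.gamma(a=2.0, scale=1.5)
draws = prior.ppf(percentiles)

# The draws now follow the prior's density: right-skewed, so the
# sample mean exceeds the sample median, unlike an even grid.
print(draws.mean(), np.median(draws))
```

The same percentile-to-draw mapping works for any distribution with an invertible CDF, which is what allows non-uniform priors to be plugged into a stratified sampling scheme.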
Another consequence of high-dimensional parameter space is that it cannot be exhaustively sampled. Thus, sampling methods need to strategically stratify the space and choose parameter values for a particular calibration method. The two main strategies are global and local sampling. Global sampling schemes such as Latin hypercube sampling (LHS) ( Table 2 ), Sobol sampling, and random sampling—also called Monte Carlo sampling—provide a means of broadly exploring parameter space, studied extensively in Renardy et al. [ 22 ]. On the other hand, local sampling schemes depend on previously sampled values to suggest future values. We review such local sampling schemes in more detail in the section on Approximate Bayesian Computing.
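A minimal sketch of global Latin hypercube sampling, here using SciPy's `qmc` module rather than the R `lhs` package used later in this paper; the dimension count and parameter bounds are illustrative assumptions:

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of a hypothetical 4-parameter space: each of
# the n strata per dimension receives exactly one sample point.
sampler = qmc.LatinHypercube(d=4, seed=1)
unit_samples = sampler.random(n=100)              # points in [0, 1)^4

# Rescale the unit hypercube to illustrative parameter bounds.
lower = [0.1, 0.0, 1.0, 0.5]
upper = [1.0, 0.2, 2.0, 1.0]
params = qmc.scale(unit_samples, lower, upper)

# Each row of params is one parameter set to feed to the simulator.
print(params.shape)
```

Unlike plain random sampling, the stratification guarantees that every dimension is covered evenly even with a modest number of samples.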
Parameter sampling with pseudo-likelihood evaluation
Complex models lack both closed-form analytical solutions and numerical approximations of likelihood, which restricts parameter range estimation methods to iterative parameter sampling. Between iterations many parameters need to be varied and the model outcomes need to be continuously re-evaluated for goodness of fit to available experimental datasets.
For non-complex models, goodness of fit to experimental data would involve likelihood functions as described above (and in Table 1 ). However, complex models do not have tractable likelihood functions and therefore require alternative methods of comparing to experimental datasets. Instead of evaluating likelihood functions, methods for fitting complex models rely on comparisons to experimental data ( Figures 5 , 6 ). Such pseudo-likelihood evaluations are therefore used for fitting complex model parameters.
Methods to calibrate complex models
Now that we have narrowed the scope of calibration to models requiring pseudo-likelihood functions, as well as observed the need for iterative sampling to fully explore data fits for parameter space, we will detail two broad classes of methods we use to accomplish this task: CaliPro and ABC. CaliPro is introduced first because it is the more intuitive of the two methods and derives from published work from our own group.
Calibration protocol
We recently formalized a calibration protocol (CaliPro) method to calibrate complex biological models while also remaining agnostic to model type [ 14 ]. The goal of CaliPro is to adjust parameter boundaries to capture large and disparate datasets using a minimal number of iterations to converge to an acceptable fit. CaliPro classifies simulations into pass and fail sets ( Figure 6 ). We use both pass and fail classifications of model simulations to adjust parameter boundaries using CaliPro's alternative density subtraction (ADS) method. Alternatively, for parameters with smaller ranges and less variance, the highest density region (HDR) parameter adjustment method provides faster convergence, because HDR adjusts parameter ranges to regions of higher probability density whereas ADS first subtracts the probability density of fail sets and therefore outputs smaller changes. Finally, we use Boolean function thresholds to define pass rates. In practice, >75–90% of simulations passing is sufficient to end calibration, as over-constraining the convergence function risks overfitting parameters.
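The pass-fail loop at the heart of this kind of calibration can be sketched as follows. This is a toy stand-in, not the published CaliPro implementation: the one-parameter decay model, the pass band, and the percentile-based range adjustment (a simplified substitute for the ADS/HDR algorithms) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(k):
    """Toy exponential-decay model standing in for a complex simulator."""
    t = np.linspace(0, 10, 50)
    return np.exp(-k * t)

def passes(y):
    """Hypothetical pass-fail constraint: the final value must lie in an
    experimentally motivated band."""
    return 0.01 <= y[-1] <= 0.20

lo, hi = 0.01, 2.0                      # initial uniform range for k
for iteration in range(5):
    ks = rng.uniform(lo, hi, size=500)  # global sample of the range
    ok = np.array([passes(simulate(k)) for k in ks])
    pass_rate = ok.mean()
    if pass_rate >= 0.90:               # Boolean convergence threshold
        break
    # Simplified range adjustment standing in for ADS/HDR: move the
    # bounds toward the 5th-95th percentiles of the passing samples.
    lo, hi = np.percentile(ks[ok], [5, 95])

print(f"final range for k: ({lo:.3f}, {hi:.3f}), pass rate {pass_rate:.2f}")
```

The key output is a parameter range, not a point estimate: every value inside the final interval produces a passing simulation, which is the calibration goal described above.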
Using CaliPro in practice requires attention to several details. To establish model-specific pass-fail constraints of model simulations, we require a priori knowledge of the biological system and thus this step is based on user discretion. We previously discussed several examples of establishing such pass-fail constraints [ 14 ]. Secondly, when we first use CaliPro or after making significant changes to the model, the pass rate may be very low and may not improve after several iterations. This is because the low pass rate is too uninformative for the parameter range adjustment method to propose useful new parameter ranges. In such cases, the more stringent among the constraints employed should be disabled after measuring the pass set from each individual constraint and then those stringent constraints can be reapplied later after achieving higher pass rates. Thirdly, to calibrate large numbers of unknown parameters, one can reduce the number of fail set outcomes by paying attention to starting values of the most sensitive parameters. The most sensitive parameters that affect the model can be determined using partial rank correlations (PRC) [ 21 ]. Lastly, for stochastic models that have variable outcomes even with fixed parameters, the number of pass sets can plateau in an undesirable part of parameter space therefore requiring multiple starting seeds for at least some simulations to converge.
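Partial rank correlations, used above to find the most sensitive parameters, can be sketched generically as follows. This is an illustrative PRCC implementation under assumed toy data, not the code from [ 21 ]:

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation of each parameter column of X with the
    output y: rank-transform both sides, regress out the remaining
    parameters, then correlate the residuals."""
    Xr = np.column_stack([rankdata(col) for col in X.T])
    yr = rankdata(y)
    n, k = Xr.shape
    coeffs = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        res_x = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        coeffs[j] = pearsonr(res_x, res_y)[0]
    return coeffs

# Toy sensitivity check: the output depends strongly on the first two
# sampled parameters and not at all on the third.
rng = np.random.default_rng(6)
X = rng.uniform(size=(500, 3))
y = 5 * X[:, 0] - 4 * X[:, 1] + rng.normal(0, 0.1, size=500)
print(np.round(prcc(X, y), 2))
```

Parameters with PRCC magnitudes near one dominate the output and so deserve the most care when choosing starting ranges.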
A larger framework to which CaliPro belongs for complex model calibration is approximate Bayesian computing (ABC) because of the many similarities between the independently developed techniques. Approximate Bayesian computing was mentioned in Figure 2 of the CaliPro method paper [ 14 ], but the techniques were not directly compared. This paper helps address that gap.
Approximate Bayesian computing
Approximate Bayesian computing (ABC) works around the difficulty of numerically evaluating model likelihood functions by simulating model outcomes and then often applying a distance function or a kernel density estimator to compare to reference datasets ( Figure 5 ).
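In its simplest rejection form, the ABC idea can be sketched as below; the Gaussian toy model, the (mean, std) summaries, and the tolerance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# "Observed" data: 200 draws from a Normal with unknown mean (true mean 4).
observed = rng.normal(4.0, 1.0, size=200)

def distance(sim, obs):
    """Euclidean distance between simple (mean, std) summaries."""
    return np.hypot(sim.mean() - obs.mean(), sim.std() - obs.std())

# ABC rejection: sample from the prior, simulate, and keep parameters
# whose simulated data fall within a tolerance epsilon of the data.
epsilon = 0.25
accepted = []
for _ in range(20_000):
    mu = rng.uniform(0.0, 10.0)               # prior on the mean
    sim = rng.normal(mu, 1.0, size=200)
    if distance(sim, observed) < epsilon:
        accepted.append(mu)

accepted = np.array(accepted)
# The accepted parameters approximate the posterior, concentrated
# around the mean of the observed data.
print(len(accepted), accepted.mean())
```

Rejection sampling wastes most simulations, which motivates the sequential samplers discussed next that reuse previously accepted locations.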
ABC requires a parameter sampling strategy to generate distributions of parameters of interest. Nearly all sampling strategies used in practice for ABC sample around previously visited locations using Markov processes, with weights that are iteratively updated to guide subsequent sampling [ 18 ]. Sequential Monte Carlo (SMC) sampling uses hidden states to evolve a slowly changing distribution that efficiently reaches the true parameter distributions [ 18 , 24 ]. Unlike most other Monte Carlo sampling schemes used in Bayesian inference, where chains primarily serve to measure the r-hat quality of sampled parameters, SMC chains accelerate exploration of parameter space, and SMC is therefore the sampling technique frequently used for ABC calibration.
Summary statistics and their sufficiency
Summarizing model outcomes or experimental datasets is necessary in these pseudo-likelihood parameter sampling situations when the model output does not exactly match the type of data. Applying summaries to model outcomes or datasets allows them to be numerically compared. An example of a model summary statistic would be the total diameter of a tumor calculated from the corresponding simulated spatial components.
For some applications, the general case of summarizing many outcomes of non-linear complex models is prone to inefficient inference or even non-convergence to the true parameter distribution when the summary statistics are not sufficient [ 25 ] as described in Table 1 . The error introduced by not having sufficient summary statistics is not measurable because the likelihood function is not available [ 26 ]. Sufficient statistics are the property of summary statistics containing as much information as the parameter samples (see Table 1 for definitions). As mentioned, truly knowing whether a summary statistic is sufficient also requires a likelihood function, therefore this validation is impractical for a complex model [ 26 ]. If a summary statistic is not sufficient, convergence to the true parameter distribution is not guaranteed. As a workaround, one can use probability density approximation (PDA) to avoid using summary statistics [ 25 ].
Due to this risk of insufficiency when using summary statistics, it is preferable to avoid using summary statistics or to limit their use to cases that require it, such as calibrating to spatial datasets, where simulations, for example, of cell type ratios or intercellular proximities must be collectively expressed as summaries rather than raw counts. Summary statistics are used in the first place to reduce the model outcome dimensionality to compare with experimental datasets. Therefore, given the importance of sufficiency and difficulty of knowing the quality of summary statistics, it is useful to have alternative calibration methods that do not require using sufficient statistics such as our method, CaliPro, and also ABC-PDA [ 25 ].
In addition to using summary statistics, another strategy to improve convergence is to use a Markov process (described in Table 1 ). This improves fitting parameter ranges by smoothing the changing parameter distribution from its initial samples to the true distribution. Using a Markov process makes it efficient to process many parameter samples [ 18 ] and is the basis of sequential Monte Carlo sampling and sequential importance sampling. Both algorithms are widely used and are known in different fields by different names such as bootstrap filtering, the condensation algorithm, particle filtering, interacting particle approximations, survival of the fittest, and recursive Bayesian filtering [ 1 ]. Particle filtering is often used to describe these sampling algorithms; therefore, we define a particle in this context. Simulation outcomes are thought of as particles consisting of intersecting probability distributions of lower dimensional model parameter summaries. Particles have associated weights and those weights are used to iteratively resample or move particles to better approximate the true parameter distributions [ 27 ]. Thus, calibration requires using sampling techniques that can scale to a large number of parameter samples of complex models using particle filtering and Markov smoothing techniques. | Method
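The resample-by-weight step that underlies particle filtering can be sketched as systematic resampling; the particle cloud and weight function here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def systematic_resample(particles, weights):
    """Resample particles in proportion to their normalized weights
    using one stratified uniform draw (systematic resampling)."""
    n = len(particles)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights / weights.sum())
    indices = np.searchsorted(cumulative, positions)
    indices = np.minimum(indices, n - 1)   # guard the floating-point edge
    return particles[indices]

# Toy example: 1000 particles with weights favoring values near 2.0.
particles = rng.uniform(0, 5, size=1000)
weights = np.exp(-((particles - 2.0) ** 2))

resampled = systematic_resample(particles, weights)
# After resampling, the cloud concentrates in the high-weight region
# and every surviving particle again carries equal weight.
print(particles.std(), resampled.std())
```

High-weight particles are duplicated and low-weight particles are dropped, which is how the sampler progressively concentrates on parameter regions that fit the data.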
Here we outline how we perform both the CaliPro and ABC calculations. In our previously published work, CaliPro was only used to tune uniform distributions; to compare CaliPro more directly to ABC, here we extend CaliPro's uniform distribution boundaries to non-uniform distributions. We do this by fitting non-uniform distribution parameters using both the boundaries along with their globally sampled percentiles. To perform ABC, we used the PyMC package [ 28 ] with the Metropolis–Hastings kernel. We also tried using the pyABC package [ 29 ], but each calibration attempt ran out of memory even on large-memory computer clusters. The conceptual Figures 1 – 6 were created using LaTeX with the PGF-TikZ package [ 30 , 31 ]. For the remaining figures, all example models with their associated data, commented code, and output files are archived on Zenodo [ 32 ].
Calibration protocol with Latin hypercube sampling
To improve usability and understanding of LHS and CaliPro, these general methods have been implemented in R using the lhs package [ 33 ]. The CaliPro pass–fail criteria are described in the results section for each of the models. No termination pass percentage was used; instead, calibration was allowed to continue for a pre-determined number of iterations.
All parameter updates are done using the alternative density subtraction (ADS) algorithm. ADS outputs parameter boundaries originally intended for uniform distributions. To extend ADS to other types of distributions, we use the sampled LHS percentiles along with the newly drawn distribution boundaries to fit the parameter distributions between iterations (see Section 2.1.1 ).
Fitting probability distributions using both percentiles and distribution draws
Probability distributions are conventionally fit using many distribution draws. However, both CaliPro's HDR and ADS algorithms provide uniform distribution boundaries as outputs, which we then need to fit to non-uniform distributions. To meaningfully use the limited two distribution draws of the boundaries, we also need to know the percentiles to which those data points belong. This idea of using both the distribution draws and their percentiles is also useful for setting initial parameter distributions from biological journals and clinical trial datasets that are often reported in the form of 3 data points: the median and interquartile range, which together provide distribution draws for the 25, 50, and 75% quantiles. This was necessary for fitting distributions to the parameters of the immune-HIV-1/AIDS example model. Reporting experimental parameters using quantiles implies that distributions cannot be fit in the conventional way using maximum likelihood of a large collection of distribution draws. Instead, we use both the known distribution draws, x, and their corresponding known percentiles, p, to fit the unknown distribution parameters, θ, using optimization. We supply the percentiles, p, with the estimated distribution parameters, θ̂, to the inverse cumulative density function to obtain estimated distribution draws, x̂, and then compare against the known distribution draws, x, to minimize the prediction error while tuning θ̂. We detail the corresponding equations for obtaining this distribution fit below. We used the L-BFGS-B bounded optimization method [ 34 ] implemented by the optim () function of the stats R package [ 35 ] to fit the distribution parameters.
θ̂ = argmin_θ Σ_{i=1}^{n} (x_i − F⁻¹(p_i; θ))²

where,

f(x; θ)  Distribution probability density function (PDF)

F(x; θ)  Distribution cumulative density function (CDF)

F⁻¹(p; θ)  Distribution percentile function or inverse CDF

x  Known distribution draws

p  Known percentiles corresponding to known distribution draws

θ̂  Estimated distribution parameters that are being optimized/fit

x̂ = F⁻¹(p; θ̂)  Estimated distribution draws from the known percentiles and estimated parameters

n  Number of known distribution draws with corresponding known percentiles.
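The quantile-matching fit described above can be sketched in Python (the paper's implementation uses R's optim(); SciPy's L-BFGS-B is the analogous routine). The gamma target and the quartile values are illustrative assumptions; the quartiles here were computed from a gamma distribution with shape 2 and scale 1.5, which the fit should approximately recover:

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical published summary: median and interquartile range,
# giving known draws x at known percentiles p.
p = np.array([0.25, 0.50, 0.75])
x = np.array([1.44, 2.52, 4.04])

def loss(theta):
    """Squared error between the known draws and the inverse-CDF values
    predicted from candidate gamma parameters."""
    shape, scale = theta
    x_hat = stats.gamma.ppf(p, a=shape, scale=scale)
    return np.sum((x_hat - x) ** 2)

# Bounded quasi-Newton fit, analogous to optim(method = "L-BFGS-B") in R.
result = optimize.minimize(loss, x0=[1.0, 1.0], method="L-BFGS-B",
                           bounds=[(1e-6, None), (1e-6, None)])
shape_hat, scale_hat = result.x
print(shape_hat, scale_hat, result.fun)
```

With only three quantile constraints and two free parameters, the fit is well determined here, but for more flexible distributions additional reported quantiles would be needed.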
t-distributed stochastic neighbor embedding (t-SNE) plots
The t-SNE coordinates were calculated using the Rtsne package [ 36 – 38 ]. These plots help visualize higher-dimensional parameter space sampled by LHS.
Approximate Bayesian computing with sequential Monte Carlo sampling
We ran the ABC-SMC inference method for the example models using the PyMC package [ 28 ] version 5.3.0 in the python programming language. For the immune-HIV-1/AIDS model, we customized the solver and distance comparison as follows:
We replaced the default SMC multivariate-normal kernel with the Metropolis–Hastings kernel as a workaround for crashes caused by incomplete model simulations.
Instead of comparing simulations against multiple patient CD4+ cell count timeseries, we chose only a single patient timeseries to avoid use of summary statistics due to hard-to-diagnose errors from deferred evaluations of the pytensor symbolic expressions when attempting to run a summary statistics function.
The patient timeseries is known to be non-progressive HIV infection. Therefore, to minimize the fixed error from the distance function while the simulation reaches steady state, the first 5 years of the simulation are omitted, and the 5th year onward is compared against the 10 years of patient timeseries. | Results
Our goal is to apply both the CaliPro and ABC approaches to calibrate two different examples, one non-complex and one complex model, and to compare them. The following models of ordinary differential equations (ODEs) will be evaluated: the classic predator–prey model [ 39 , 40 ] and a viral–host response model of HIV-1/AIDS infection [ 41 ]. While stochastic models, including agent-based models, are of particular interest for these calibration techniques, directly calibrating such large models is beyond the scope of this work and instead we discuss these models using examples already published.
Finally, we will compare the calibration performance of CaliPro-LHS against ABC-SMC to show the practical strengths and weaknesses of each. These approaches will guide modelers in fitting the parameter space of complex non-linear models to incomplete experimental datasets.
Ordinary differential equation models
Lotka–Volterra
The two-equation predator–prey ODE model [ 39 , 40 ] is one of the simplest systems with which to evaluate fitting against noisy simulated data:

dx/dt = αx − βxy

dy/dt = δxy − γy

where,
x  prey population

y  predator population

α  prey growth rate = 1.0 [per year]

β  prey death rate = 0.1 [per year]

γ  predator death rate = 1.50 [per year]

δ  predator growth rate = 0.75 [per year]
Initial values:
initial prey population = 10.0
initial predator population = 5.0
Priors:
α  [per year]

β  [per year] (detuned from training data)

γ  [per year] (fixed)

δ  [per year] (fixed)
The simulated data use the parameter β = 0.1 and add random noise drawn from a standard Normal distribution. The uncalibrated (detuned) prey death rate parameter β = 1.0 causes the prey population to crash early on, and therefore the predator population crashes as well; both only recover values close to their original population levels starting from year 5 onward. The uncalibrated population trajectories are shown by sampling from the prior distribution. We show results of varying and calibrating the α and β parameters against the noisy dataset while keeping parameters γ and δ fixed.
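A sketch of generating noisy training data from the standard Lotka–Volterra right-hand side with the parameter values listed above; the solver settings and noise seed are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, state, alpha, beta, gamma, delta):
    """Standard predator-prey right-hand side."""
    x, y = state
    return [alpha * x - beta * x * y,
            delta * x * y - gamma * y]

# Data-generating parameters use beta = 0.1; the detuned prior instead
# centers beta near 1.0, which crashes both populations early on.
t_eval = np.linspace(0.0, 15.0, 200)
sol = solve_ivp(lotka_volterra, (0.0, 15.0), [10.0, 5.0],
                args=(1.0, 0.1, 1.5, 0.75), t_eval=t_eval, rtol=1e-8)

# Noisy "experimental" prey counts: trajectory plus standard Normal noise.
rng = np.random.default_rng(5)
noisy_prey = sol.y[0] + rng.standard_normal(t_eval.size)
print(sol.success, noisy_prey[:3])
```

The resulting noisy trajectory plays the role of the reference dataset against which both calibration methods are evaluated.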
When we use CaliPro for this set of dependent parameters, even in this non-complex model, we see a limitation of global LHS sampling: the α and β parameters are dependent, but LHS assumes the sampled parameters are independent. We show that the parameter ranges adjusted by CaliPro oscillate between two very similar ranges ( Figures 7 , 8 ). The workaround for this issue of parameter dependence is simply to fix one of the parameters and calibrate the other. Nevertheless, we show this calibration result of varying both parameters to directly compare with the calibration result from the ABC-SMC method.
For ABC-SMC, we draw 2000 samples at each iteration until SMC beta convergence ( Figure 9A ). We subsample trajectories from parameters before and after calibration. The calibrated trajectories are much closer to the noisy dataset because the distance function is stricter than the wider CaliPro boundaries, and the expected value of the β parameter is closer to the value of 0.1 used to generate the noisy data ( Figure 9B ).
Immune-HIV-1/AIDS model
The four-equation model of immune-HIV-1/AIDS infection [ 41 ] offers additional complexity over the Lotka–Volterra model as it has 8 parameters and oscillatory regions of parameter space. The oscillatory regions are challenging for the solver, and the solver will often fail for combinations of parameters that produce sharp oscillations. Therefore, a calibration method needs to be resilient to sampled parameter combinations that result in incomplete or unavailable simulations. The model is:

dT/dt = s + rT(1 − (T + L + A)/T_max) − μ_T T − k_1 VT

dL/dt = k_1 VT − μ_T L − k_2 L

dA/dt = k_2 L − μ_b A

dV/dt = N μ_b A − k_1 VT − μ_V V

where,
T  Uninfected CD4+ cells

L  Latently infected CD4+ cells

A  Actively infected CD4+ cells

V  Free HIV virus

s  Rate of supply of CD4+ cells from precursors (day −1 mm −3 )

r  Rate of growth for the CD4+ cells (day −1 )

T_max  Maximum CD4+ cells (mm −3 )

μ_T  Death rate of uninfected and latently infected CD4+ cells (day −1 )

μ_b  Death rate of actively infected CD4+ cells (day −1 )

μ_V  Death rate of free virus (day −1 )

k_1  Rate constant for CD4+ cells becoming infected (mm 3 day −1 )

k_2  Rate of latently to actively infected conversion (day −1 )

N  Number of free viruses produced by lysing a CD4+ cell (counts)
Initial values:
Priors:
s  [day −1 mm −3 ]

r  [day −1 ]

T_max  (fixed)

μ_T  [day −1 ]

μ_b  [day −1 ]

μ_V  [day −1 ]

k_1  [mm 3 day −1 ]

k_2  [day −1 ]

N  Negative-Binomial ( n = 13.5, p = 0.0148) [counts]
The immune-HIV-1/AIDS model that we originally published used uniform distribution boundaries for all parameters. However, to make the Bayesian and CaliPro approaches comparable, we treated the bounds of the uniform distributions as known percentiles to fit gamma and negative-binomial distributions, so that the sampler could more widely explore parameter space. In addition to the oscillatory regions of parameter space, this wider parameter space imposes an additional challenge, in contrast to the approach of detuning parameters that we used in the previous example.
Applying CaliPro classifies simulations into pass and fail sets to adjust parameter ranges, and these classifications are based on user-discretionary boundaries that fit the reference data [ 14 ]. We overlay the uninfected CD4 + T-cell counts of 6 patients [ 42 ] with the CaliPro Boolean pass-fail region surrounding all the tracks, rounded to the nearest hundred counts, namely 300 and 2100 ( Figure 10A ). Across the 5 LHS replicates, 83−92% of simulations pass using this criterion without making any adjustments, and therefore we do not need to calibrate further, as doing so risks overfitting the model. Nevertheless, to better understand how the CaliPro and ABC methods handle the immune-HIV-1/AIDS model with its problematic regions of outcome space, we ran CaliPro for 50 iterations so that the parameter fitting could be compared against ABC. CaliPro was able to identify parameter ranges with >90% passing simulations at four later iterations (at iterations 27, 29, 42, and 48), indicated by the peak dots in Figure 10B , which summarize the pass-fail graphs in Figure 10C , with the corresponding parameters highlighted in Figure 11A . The grouping of pass and fail simulations in parameter space is shown in Figures 11B , C .
Applying ABC, the parameters settled on were nearly an order of magnitude away on either side of the reference patient data, even when fixing most of the parameters to values we know do not produce oscillations ( Figure 12 ). One reason may be that the kernel parameter updates are less tolerant than CaliPro of missing model simulations, which occur when certain parameter sets cause simulations to fail. To handle missing simulations, the ABC calibration framework may need to assign infinite distances to incomplete simulations and treat those specifically when computing the next proposed parameters in the SMC chains. Together, these two ODE examples shed light on the strengths and weaknesses of these methods when applied to dependent parameters and to models with holes in parameter space. We compare the two approaches in more detail next.
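The suggested workaround — assigning infinite distance to incomplete simulations so the sampler rejects them rather than erroring — could look like the sketch below (illustrative only; this helper is not part of pyABC or PyMC):

```python
import numpy as np

def robust_distance(simulated, observed):
    """Euclidean distance between trajectories, but incomplete simulations
    (NaN/inf values) get infinite distance so the SMC sampler simply rejects
    that particle instead of crashing the chain."""
    simulated = np.asarray(simulated, dtype=float)
    if not np.all(np.isfinite(simulated)):
        return float("inf")
    return float(np.linalg.norm(simulated - np.asarray(observed, dtype=float)))

obs = [3.0, 4.0]
d_ok = robust_distance([0.0, 0.0], obs)            # plain Euclidean distance
d_bad = robust_distance([float("nan"), 4.0], obs)  # infinite -> particle rejected
```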
Calibration of stochastic models
Comparing the CaliPro-LHS and ABC-SMC methods on identical complex stochastic models is beyond the scope of this work. Use of these methods separately on stochastic models has been described previously, as follows. CaliPro-LHS has been used to calibrate GranSim, a stochastic agent-based model that captures the formation of lung structures called granulomas during infection with Mycobacterium tuberculosis [ 14 ]. ABC-SMC has been used to calibrate a stochastic agent-based model of tumor spheroid growth [ 43 ] and stochastic models of cell division and differentiation [ 44 ].
Stochastic models have both aleatory and epistemic uncertainty. Aleatory uncertainty arises from uncertainty in parameter estimates, and additional (epistemic) uncertainty arises from the stochastic components of the model. We have dealt mostly with aleatory uncertainty in this work; the main difference when calibrating stochastic models is this epistemic uncertainty. Thus, there is an additional requirement: for each parameter set, the model must be simulated with different random number generator seeds at least 3–5 times and the model outputs then aggregated. This reduces the epistemic uncertainty. To avoid additional complexity in the calibration code, the model executable itself can wrap the underlying replicates and aggregation so that the calibration code only sees aggregated model outputs with the same dimensionality as the outputs of a single replicate [see [ 14 ] for an example].
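A minimal sketch of this wrapping idea, with a made-up stochastic model standing in for a real simulator: the wrapper runs seeded replicates internally and returns only the aggregated trajectory, so the calibration code sees the same dimensionality as a single run.

```python
import numpy as np

def noisy_model(params, seed):
    """Stand-in stochastic model: a decaying curve plus seeded noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(5)
    return params["a"] * np.exp(-params["k"] * t) + rng.normal(0.0, 0.1, t.size)

def wrapped_model(params, n_replicates=5, base_seed=0):
    """What the calibration code calls: the replicates run internally, and only
    the mean trajectory is returned, shaped exactly like one replicate."""
    runs = [noisy_model(params, base_seed + i) for i in range(n_replicates)]
    return np.mean(runs, axis=0)

out = wrapped_model({"a": 1000.0, "k": 0.5})
```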
Comparison of CaliPro with ABC-SMC
We present a summary of the differences between CaliPro and ABC in Table 3 . Figure 2 provides a decision flowchart for choosing between these two methods, or for choosing among the landscape of other available methods.
One significant difference between CaliPro and ABC is whether global or local sampling is employed to explore parameter space when proposing parameter ranges. CaliPro typically uses Latin hypercube sampling (LHS), a global sampling technique [ 45 ]. ABC starts with global sampling using the initial priors and then progressively updates the priors using local sampling with sequential Monte Carlo weight adjustment; the weight-adjustment methods are often called particle filtering or importance sampling. Global sampling using rejection sampling [ 46 ] is a simple method used with ABC but is generally considered too slow to be practical. The approaches of CaliPro and ABC therefore have complementary strengths: ABC is guaranteed to converge to the true posterior distribution given sufficient samples, whereas CaliPro requires fewer iterations and employs global sampling throughout.
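The global-sampling side of this contrast can be illustrated with SciPy's Latin hypercube sampler (the three parameter ranges below are hypothetical): each parameter axis is divided into n equal strata and every stratum receives exactly one sample.

```python
import numpy as np
from scipy.stats import qmc

lower = [0.01, 0.1, 1.0]   # hypothetical bounds for three parameters
upper = [0.10, 1.0, 10.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=100)              # stratified points in [0, 1)^3
samples = qmc.scale(unit, lower, upper)   # mapped onto the parameter ranges

# Defining LHS property: along each axis, every 1/n stratum holds one point
strata = np.floor(unit[:, 0] * 100).astype(int)
```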
The advantages of using CaliPro include a reduced risk of overfitting to partial experimental data, by setting the pass constraints to accept simulated values that fall within ranges of experimental data. Secondly, CaliPro is often used with global parameter sampling such as Latin hypercube sampling (LHS) and therefore samples broadly from parameter space [ 45 ], which more robustly captures wider ranges of experimental outcomes. Lastly, CaliPro is resilient to holes in outcome space because such outcomes are classified into the fail set to inform future sampling.

Discussion
Calibration often needs to be performed when building a complex model, as well as whenever equations are added or the model is reparametrized. Both CaliPro and ABC rely on a pseudo-likelihood to tune model parameters so that they capture full ranges of biological and clinical outcomes. As we detail in Table 3 , CaliPro’s use of binary constraints makes it possible to encode any number of constraints from experimental and synthetic datasets. In larger models, applying these constraints is often done in stages: once the pass rate is sufficiently high, more stringent constraints can be applied at later stages of calibration. However, the system of binary constraints also limits CaliPro in ways that do not limit ABC. Using CaliPro, a single “bad” parameter value rejects the entire parameter set, whereas the local search of ABC-SMC particle weighting can help adjust and improve the “bad” parameter value. Said another way, the discrete binary encoding of CaliPro is not smooth and can propose narrower parameter sets than ABC, although the ADS and HDR functions smooth these discrete pass-fail results into adjusted parameter ranges. Lastly, ABC is more resilient to getting stuck on local maxima; CaliPro relies on replicates to mitigate the effects of local maxima.
The two examples we discussed highlight cases where one method performs better than the other. CaliPro’s LHS assumes independent parameters. When calibrating the two dependent parameters of the predator-prey model, CaliPro oscillates between two sets of values for these parameters, illustrating the behavior one may encounter when the sampled parameters violate the independence assumption. In such a case, it is necessary to fix one of the dependent parameters in order to calibrate the other. Conversely, ABC-SMC assumes complete data and has trouble with not-a-number (NaN) model output values. Some of these errors were mitigated by using the simpler Metropolis–Hastings kernel of PyMC instead of the default multivariate Normal kernel. CaliPro accommodates such incomplete simulation output by assigning the offending parameter sets to the fail set when updating parameter ranges for the next iteration. Thus, CaliPro is resilient to such discontinuities in parameter space.
The SMC sampler in ABC makes the technique useful for slow models and/or for exploring high-dimensional parameter space: unlike most other Bayesian samplers, where adding more chains serves only to check intra- and inter-chain parameter variance, each SMC chain adjusts particle weights to effectively explore more parameter space. Frameworks like pyABC further offer the unique feature of adaptively spawning more chains to minimize sampler wall time.
We encountered several practical challenges with ABC software. Surprisingly, even when starting from the well-behaved, published parameter ranges of the immune-HIV-1/AIDS model, the pyABC software would never complete calibration, even though 75–84% of simulations passed CaliPro’s more relaxed Boolean data-fitting criterion. We encountered two sources of failures with pyABC: “prior density zero” errors during sampling, and memory exhaustion due to limited control over the adaptive population samplers. PyMC, another ABC-SMC framework, was able to complete calibration, but we had difficulty troubleshooting errors when attempting complex distance functions over multiple patient time-series trajectories, because computation is deferred until much later during execution, making it hard to relate error messages to the relevant code. The complex implementations of both pyABC and PyMC make it difficult to reason about sources of model-fitting errors. The immune-HIV-1/AIDS model example therefore highlights the simplicity and usefulness of CaliPro for earlier stages of parameter inference.
Tuning stochastic models requires aggregating additional model simulations to reduce the epistemic uncertainty in parameter tuning. As mentioned, the simplest way to integrate stochastic models into calibration frameworks is to make the stochasticity invisible to the calibration framework by wrapping the model replicates to produce aggregated output with the same dimensionality as unaggregated output.
Further work on CaliPro would entail addressing the numerical difficulties of fitting distributions to draws close to zero. To work around numerical instability when fitting these small draws to distributions, we used rescaling factors, but the appropriate use of rescaling factors is specific to the type of distribution being fit. An alternative approach in such probability-algorithm implementations is to convert to log-scale for fitting and convert back afterwards. Beyond fitting small distribution values, another complexity of using the optimizer is choosing useful initial values for the distribution parameters. More work is needed to choose initial values automatically, or to find an off-the-shelf software library that provides this feature. Such improvements to the distribution-tuning method of using both percentiles and draws would allow CaliPro to meaningfully tune non-uniform parameters from a limited number of simulations.

Author contributions
PN: Methodology, Software, Visualization, Writing—original draft, Writing—reviewing and editing. DK: Conceptualization, Funding acquisition, Supervision, Writing—review and editing.
Abstract

Mathematical and computational models of biological systems are increasingly complex, typically comprising hybrid multi-scale methods such as ordinary differential equations, partial differential equations, and agent-based and rule-based models. These mechanistic models concurrently simulate detail at the resolutions of whole-host, multi-organ, organ, tissue, cellular, molecular, and genomic dynamics. In the absence of analytical and numerical solutions, solving complex biological models requires iterative, parameter-sampling-based approaches to establish appropriate ranges of model parameters that capture corresponding experimental datasets. However, these models typically comprise large numbers of parameters and therefore large degrees of freedom, so fitting them to multiple experimental datasets over time and space presents significant challenges. In this work we review, test, and advance calibration practices across models and dataset types in order to compare methodologies for model calibration. Evaluating the process of calibrating models includes weighing the strengths and applicability of each approach as well as standardizing calibration methods. Our work compares the performance of our model-agnostic Calibration Protocol (CaliPro) with approximate Bayesian computation (ABC) to highlight strengths, weaknesses, synergies, and differences among these methods. We also present next-generation updates to CaliPro. We explore several model implementations and suggest a decision tree for selecting calibration approaches to match dataset types and modeling constraints.
We thank Dr. Linus Schumacher for helpful discussions on this manuscript and ABC-SMC. We thank Paul Wolberg for computational assistance and support.
Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This research was supported by the NIH grants R01 AI50684 (DK) and was supported in part by funding from the Wellcome Leap Delta Tissue Program (DK). This work used Anvil at the Purdue University Rosen Center for Advanced Computing through allocation MCB140228 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which was supported by the National Science Foundation (NSF) grants #2138259, #2138286, #2138307, #2137603, and #2138296.
Data availability statement
The original contributions presented in the study are archived on Zenodo [ 32 ]. Further inquiries can be directed to the corresponding author.

Citation: Front Appl Math Stat. 2023 Oct 18; 9:1256443 (CC BY).
PMC10785826 (PMID: 38223701)
Abiotic factors such as salt, heat, and drought stresses, and nutrient deficiency are responsible for extensive crop loss and soil degradation, resulting in an estimated $27B annual loss ( Pitman and Lauchli, 2002 ; Hoang et al., 2016 ; Zorb et al., 2019 ). For example, salt stress is a primary abiotic stress, estimated to impact over 20% of irrigated agricultural land, with >50% of arable land anticipated to be salt affected by 2050 ( Wang et al., 2003 ; Munns and Tester, 2008 ; Jamil et al., 2011 ). The abundance of soil salinization stems from multiple elements, including inadequate agricultural practices (e.g., irrigation), land degradation (e.g., evaporates), and adverse climatic conditions (e.g., drought, rising sea levels, etc.). Furthermore, the accumulation of salt in the soil leads to salt and drought stress responses in plants that drastically impede the plant’s fitness, with effects ranging from decreased yield, decreased nutrient acquisition, and impeded root and shoot development to cell oxidation, nutrient imbalance, ion toxicity, and chlorophyll degradation ( Munns and Tester, 2008 ; Shahbaz and Ashraf, 2013 ). Therefore, it is imperative to implement sustainable agricultural strategies to limit the loss in global crop production due to these stresses.
Rice is one of the most susceptible crops to high salt concentrations ( Munns and Tester, 2008 ; Hanin et al., 2016 ; Hoang et al., 2016 ). It is estimated that rice yield is reduced by almost 30–50% annually due to salt stress, and climate change will likely worsen it ( Eynard et al., 2004 ). So naturally, numerous studies have been conducted on rice to investigate the physiological responses and molecular mechanisms under high salt concentrations (100–250 mM NaCl) ( Li et al., 2017 ; Chandran et al., 2019 ; Liu et al., 2019 ). These studies in rice have identified several salt stress-responsive and -tolerance genes, contributing to our understanding of the genetic pathways(s) regulating the plant’s response and adaptation to salt stress ( Kawasaki et al., 2001 ; Chandran et al., 2019 ; Razzaque et al., 2019 ; Mansuri et al., 2020 ; Farooq et al., 2021 ). However, developing salt-tolerant crops through transgenic technologies and conventional breeding approaches can be labor-intensive and time-consuming. Furthermore, some salt-tolerant plants developed using these approaches have had limited success under field saline conditions ( Jamil et al., 2011 ). Therefore, one option is to simultaneously utilize alternative approaches to promote sustainable agriculture, such as using plant-beneficial microbes.
Plants can benefit from associations with multiple microbes, including mycorrhizal fungi and plant growth-promoting bacteria (PGPB) ( Santi et al., 2013 ; Pankievicz et al., 2019 ). Several studies have shown that PGPB can promote plant growth via biological nitrogen fixation, hormone synthesis, protection against biotic and abiotic stresses, phosphate solubilization, iron sequestration, etc. ( Glick, 2012 ; Olanrewaju et al., 2017 ; Backer et al., 2018 ; Pankievicz et al., 2021 ). As a result, some of these PGPB (e.g., Azospirillum , Burkholderia , Herbaspirillum ) are already used as an inoculum in agriculture worldwide. In addition, studies in different plant systems (e.g., wheat, rice, Arabidopsis thaliana ) have reported that inoculation with PGPB such as A. brasilense and P. phytofirmans can promote growth and nutrient uptake under high salinity conditions ( Hamdia et al., 2004 ; Pinedo et al., 2015 ; Ilangumaran and Smith, 2017 ; Kumar et al., 2020 ; Kruasuwan et al., 2023 ). However, little is known about how these PGPB mediate salt stress in the host plants at a molecular level.
Previously, we established an experimental system in which A. brasilense Sp245 can promote rice growth under in-vitro conditions ( Thomas et al., 2019 ). In the current study, using this system, we investigated if A. brasilense Sp245 could promote rice (Nipponbare cv.) growth under high salt concentrations (100 mM and 200 mM NaCl). We hypothesized that A. brasilense would improve rice growth under high salt stress and regulate host gene expression. We also hypothesized that the expression pattern of rice genes involved in stress and defense responses, hormone signaling, and nutrient transport would be regulated during this process. Understanding the underlying molecular mechanisms via which A. brasilense improves rice growth under salt stress will be vital as we develop strategies to grow crops under harsh environmental conditions.

Materials and methods
Plant material, growth conditions, salt treatment, and bacterial inoculation
The plant preparation and growth conditions in this study were similar to previous studies ( Hiltenbrand et al., 2016 ; Thomas et al., 2019 ; Wiggins et al., 2022 ). First, the wild-type rice ( Oryza sativa cv. Nipponbare) seeds were surface-sterilized in a 70% (v/v) sodium hypochlorite solution for approximately fifteen minutes, rinsed five times with distilled water, and then imbibed overnight (no less than 12 hr) in sterile, distilled water. Next, the sterilized rice seeds were placed onto sterile germination paper (Anchor Paper, Saint Paul, MN, USA) in 9-cm Petri plates (#633185, Greiner bio-one, Monroe, NC, USA), sealed with parafilm (L-2020-1, BioExpress, Kaysville, UT, USA), and allowed to germinate in the dark for approximately 72–96 hrs. The germinated seedlings were transferred onto 15-cm Petri plates (#639102, Greiner bio-one, Monroe, NC, USA) containing low-N₂ Fahraeus medium (FM) and grown inside a Percival growth chamber (#CU-22L, Perry, IA, USA) with a 16-h, 22°C day and 8-h, 24°C night cycle, 150–200 μmol m⁻² s⁻¹ light intensity, and 65% humidity for seven days. Next, the rice seedlings were transferred to new sterile 15-cm Petri dishes containing FM supplemented with different amounts of NaCl (0, 100, and 200 mM). These salt concentrations represent high salt stress and have been used in previous salt stress studies in rice ( Kawasaki et al., 2001 ; Li et al., 2017 ; Chandran et al., 2019 ; Liu et al., 2019 ). The rice roots were inoculated with or without A. brasilense Sp245 as done previously ( Thomas et al., 2019 ; Wiggins et al., 2022 ). Briefly, bacteria were grown on Tryptone Yeast Extract (TY) media at 30°C to an optical density (600 nm) of 0.6 ( Schnabel et al., 2010 ; Bible et al., 2015 ; Mitra et al., 2016 ). Next, the bacterial cells were resuspended in sterile water before the rice seedlings were inoculated with 10⁸ cells/ml of A. brasilense Sp245. We used sterile water as a mock treatment (control) in these experiments.
The plants were grown in the Percival growth chamber under the abovementioned conditions for seven or fourteen days before the plant mass phenotypes were recorded. These experiments included at least three biological replicates. Plant masses were analyzed individually via one-way ANOVA using JMP v15 software (SAS Institute Inc., NC, USA) among all four treatments: control, salt only, A. brasilense only, and A. brasilense + salt. For post-hoc analysis, planned comparisons were designed as follows: control vs. salt-only treatment, control vs. A. brasilense-only treatment, and salt-only vs. A. brasilense + salt treatment.
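As a rough sketch of this statistical design (the authors used JMP; here SciPy stands in, with synthetic plant masses, illustrative group means, and a Welch t-test approximating the planned contrast rather than a pooled-variance contrast):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic plant masses (g); the group means and sizes are illustrative only
control = rng.normal(0.30, 0.05, 60)
salt_only = rng.normal(0.18, 0.05, 60)
azo_only = rng.normal(0.36, 0.05, 60)
azo_salt = rng.normal(0.24, 0.05, 60)

# Omnibus one-way ANOVA across the four treatments
f_stat, p_omnibus = stats.f_oneway(control, salt_only, azo_only, azo_salt)

# Planned comparison: salt-only vs. A. brasilense + salt (Welch t-test used
# here as a stand-in for a pooled-variance planned contrast)
t_stat, p_contrast = stats.ttest_ind(salt_only, azo_salt, equal_var=False)
```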
RNA-sequencing and data analysis
Briefly, the samples for the RNA-seq experiment include: (a) rice (Nipponbare cv.) + mock inoculation; (b) rice (Nipponbare cv.) + Azospirillum brasilense ; (c) rice (Nipponbare cv.) + 200 mM NaCl treatment; (d) rice (Nipponbare cv.) + Azospirillum brasilense + 200 mM NaCl treatment. Total RNA was extracted from rice roots seven days post-treatment (dpt) from the abovementioned treatment groups using Qiagen RNeasy ® Plant Mini Kit (Cat #74904, California, USA) per manufacturer’s protocol. The RNA samples were then treated with Ambion ® DNA-free TM DNase Treatment and Removal (Cat #AM1906, California, USA) kit, and RNA concentrations and purities were measured using the NanoDrop 2000 spectrophotometer (Thermo Scientific, Delaware, USA). Three biological replicate samples were obtained for each treatment group.
Total RNA isolated from the different samples was sent to Novogene Genomics Services and Solutions (CA, USA) for RNA integrity tests, library preparation, and sequencing. Messenger RNA was purified from total RNA samples using poly-T oligo-attached magnetic beads for poly-A enrichment. The libraries were checked with Qubit and real-time PCR for quantification and with a bioanalyzer for size-distribution detection. The quantified libraries were pooled and sequenced on an Illumina NovaSeq platform, and paired-end reads were generated. The raw sequence data are publicly available from the Sequence Read Archive (SRA) under the BioProject accession ID PRJNA962515. Data analysis was performed as done previously ( Thomas et al., 2019 ; Wiggins et al., 2022 ). Briefly, raw 150-bp paired-end reads were processed using Trimmomatic (version 0.39) ( Bolger et al., 2014 ) to remove Illumina adapter and PCR primer sequences, remove leading and trailing bases with low quality scores, trim read ends if the average quality per base dropped below a specified threshold (Phred score 15), and drop reads shorter than 50 base pairs after applying all filtering steps. Reads were aligned to the rice ( Oryza sativa ) genome using TopHat (version 2.0.12) ( Trapnell et al., 2009 ), allowing two base-pair mismatches per read (default parameter). Aligned reads were quantified by gene locus and normalized to fragments per kilobase of transcript per million mapped reads (FPKM) values using Cufflinks (version 2.2.1) ( Trapnell et al., 2010 ). The Oryza sativa reference genome (version 7) and gene annotations for 55986 loci were obtained from the Rice Genome Annotation Project ( Kawahara et al., 2013 ). Differential expression analysis was performed using Cuffdiff (part of the Cufflinks software), and significant differentially expressed genes (DEGs) were defined as those with false discovery rate (FDR) <0.05 and absolute fold change greater than 1.5 (|FC|>1.5).
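The DEG criterion (FDR < 0.05 and |FC| > 1.5) can be sketched on a toy cuffdiff-style table (the column names and values below are illustrative, not Cuffdiff's actual output format):

```python
import pandas as pd

# Toy expression table with FDR-adjusted p-values (q_value)
df = pd.DataFrame({
    "gene_id": ["gene_A", "gene_B", "gene_C", "gene_D"],
    "fpkm_control": [10.0, 50.0, 5.0, 100.0],
    "fpkm_treated": [40.0, 55.0, 2.0, 100.0],
    "q_value": [0.001, 0.20, 0.01, 0.90],
})

fc = df["fpkm_treated"] / df["fpkm_control"]
# |FC| > 1.5 means >1.5-fold up OR >1.5-fold down (i.e., ratio below 1/1.5)
is_deg = (df["q_value"] < 0.05) & ((fc > 1.5) | (fc < 1 / 1.5))
degs = df.loc[is_deg, "gene_id"].tolist()   # -> ["gene_A", "gene_C"]
```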
We performed three comparisons: 1) rice + mock inoculation (or control) against rice + A. brasilense inoculation; 2) rice + mock inoculation (or control) against rice + 200 mM NaCl treatment; 3) rice + mock inoculation (or control) against rice + 200 mM NaCl treatment + A. brasilense inoculation. The DEGs in each of the three comparisons were further tested for over-represented gene ontology (GO) terms to associate differential expression profiles with biological interpretations.
Reverse-transcriptase PCR
We validated the expression of eight genes identified via RNA-seq using reverse-transcriptase polymerase chain reaction (RT-PCR). Genes validated via RT-PCR include brassinosteroid insensitive-1 receptor-like kinase, phytosulfokine precursor, WRKY27 transcription factor, MYB-family transcription factor, ent-kaurene synthetase, naringenin,2-oxoglutarate 3-dioxygenase, bZIP transcription factor, and cam1-calmodulin kinase. Three cDNA sets per treatment group were synthesized from 300 ng of pure RNA using the Thermo Scientific RevertAid RT Kit (#K1691) with oligo(dT)18 primers per the manufacturer’s instructions. Before cDNA synthesis, we processed the RNA samples with the Ambion® DNA-free DNase Treatment and Removal kit (Cat #AM1906, Foster City, CA, USA). The primers for the RT-PCR are included in Supplementary Table 5 . We used Cyclophilin as an internal reference gene in these experiments. These experiments were performed in at least three biological replicates.

Results
Azospirillum brasilense inoculation improves rice growth when grown under high salt concentrations
We investigated if A. brasilense Sp245 could improve growth in rice plants grown under high salt concentrations (100 mM and 200 mM NaCl). At seven days post-treatment (dpt), our results show that the total and root plant mass increased in A. brasilense-treated salt-stressed (200 mM NaCl) plants when compared to salt-stressed (200 mM NaCl) plants (one-way ANOVA, F(3,265) = 45.0 and 30.3, p<0.0001; planned contrast, p=0.0567 and p=0.0014, respectively) ( Figures 1A , B ). In addition, at seven dpt, the total and root plant mass increased in A. brasilense-treated salt-stressed (100 mM NaCl) plants when compared to salt-stressed (100 mM NaCl) plants (one-way ANOVA, F(3,245) = 35.6 and 23.9, p<0.0001; planned contrast, p=0.002 and p=0.0034, respectively) ( Figures 1C , D ). Other treatments included plants exposed to salt stress (200 mM and 100 mM NaCl) only, plants inoculated with A. brasilense but not exposed to salt stress, and plants exposed to mock treatment (water) ( Figure 1 ). As expected, plants exposed to salt stress (200 mM and 100 mM NaCl) only had a significant reduction in total and root mass compared to the mock controls (one-way ANOVA, F(3,265) = 45.0 and 30.3, p<0.0001; planned contrast, p<0.0001) ( Figures 1A , B ); (one-way ANOVA, F(3,245) = 35.6 and 23.9, p<0.0001; planned contrast, p<0.0001, respectively) ( Figures 1C , D ). Similarly, plants treated with A. brasilense had a significant increase in mass compared to the mock controls (one-way ANOVA, F(3,265) = 45.0 and 30.3, p<0.0001; planned contrast, p=0.0019 and p=0.0006, respectively) ( Figures 1A , B ); (one-way ANOVA, F(3,245) = 35.6 and 23.9, p<0.0001; planned contrast, p=0.0011 and p=0.0006, respectively) ( Figures 1C , D ).
Next, we investigated if A. brasilense could improve plant growth at a later time point, fourteen dpt, in rice plants grown under high salt concentrations (100 mM and 200 mM NaCl). At fourteen dpt, our results show that the total and root plant mass increased in A. brasilense-treated salt-stressed (200 mM NaCl) plants when compared to salt-stressed (200 mM NaCl) plants (one-way ANOVA, F(3,258) = 24.8 and 13.9, p<0.0001; planned contrast, p=0.087 and p=0.1, respectively) ( Figures 2A , B ). In addition, at fourteen dpt, the total and root plant mass increased in A. brasilense-treated salt-stressed (100 mM NaCl) plants when compared to salt-stressed (100 mM NaCl) plants (one-way ANOVA, F(3,169) = 13.9 and 11.9, p<0.0001; planned contrast, p=0.0007 and p=0.0039, respectively) ( Figures 2C , D ). Other treatments included plants exposed to salt stress (200 mM and 100 mM NaCl) only, plants inoculated with A. brasilense but not exposed to salt stress, and plants exposed to mock treatment (water) ( Figure 2 ). As earlier, the plants exposed to salt stress (200 and 100 mM NaCl) had a significant reduction in total and root mass compared to the mock controls (one-way ANOVA, F(3,258) = 24.8 and 13.9, p<0.0001; planned contrast, p<0.0001 and p=0.0001, respectively) ( Figures 2A , B ); (one-way ANOVA, F(3,169) = 13.9 and 11.9, p<0.0001; planned contrast, p=0.0001 and p=0.0091, respectively) ( Figures 2C , D ). Similarly, plants treated with A. brasilense had a significant increase in mass compared to the mock controls (one-way ANOVA, F(3,258) = 24.8 and 13.9, p<0.0001; planned contrast, p=0.0099 and p=0.01, respectively) ( Figures 2A , B ); (one-way ANOVA, F(3,169) = 13.9 and 11.9, p<0.0001; planned contrast, p=0.0247 and p=0.0019, respectively) ( Figures 2C , D ). In conclusion, our results show that A. brasilense inoculation improved growth in rice plants under salt stress.
The rice root transcriptomes under the different treatments
Utilizing RNA sequencing, we identified differentially expressed genes (DEGs) in rice roots for these samples: (a) rice (Nipponbare cv.) + Azospirillum brasilense vs. rice (Nipponbare cv.) + mock inoculation (water only); (b) rice (Nipponbare cv.) + 200 mM NaCl vs. rice (Nipponbare cv.) + mock inoculation (water only); (c) rice (Nipponbare cv.) + Azospirillum brasilense + 200 mM NaCl vs. rice (Nipponbare cv.) + mock inoculation (water only). We collected the data at seven dpt, and each sample included three biological replicates. Sequencing was performed using a 150-bp paired-end approach on an Illumina NovaSeq platform. An average of 22 million raw reads were obtained per sample, of which 96% survived quality control. The filtered reads had an 89.6% average mapping rate to the rice genome (Michigan State University, version 7) ( Table 1 ). We identified the DEGs in the different comparisons using the thresholds of an FDR-adjusted P-value <0.05 and an absolute fold change >1.5. For the A. brasilense-treated samples we identified 786 DEGs ( Figure 3A and Supplementary Table 1 ), for the salt-stressed samples 4061 DEGs ( Figure 3B and Supplementary Table 2 ), and for the A. brasilense-treated salt-stressed samples 1387 DEGs ( Figure 3C and Supplementary Table 3 ).
We performed a gene ontology (GO) analysis to understand the biological significance of these DEGs. Using AgriGO, we analyzed the DEGs for over-represented biological process (BP), molecular function (MF), and cellular component (CC) terms ( Tian et al., 2017 ). First, we identified twenty GO terms that were significantly enriched in A. brasilense-treated rice only. These included eight enriched in biological processes (e.g., response to biotic stimulus, response to endogenous stimulus, photosynthesis, response to stimulus, metabolic process, secondary metabolic process, etc.), four in molecular functions (e.g., catalytic activity, lipid binding, oxygen binding, and transferase activity), and eight in cellular components (e.g., endoplasmic reticulum, cell, cell wall, extracellular region, external encapsulating structure, etc.) ( Figure 4A ). Next, we identified seventeen GO terms that were significantly enriched in rice under the salt-stress treatment only. These included eight enriched in biological processes (e.g., response to abiotic stress, generation of precursor metabolites and energy, secondary metabolic process, photosynthesis, etc.), three in molecular functions (e.g., catalytic activity, transferase activity, and oxygen binding), and six in cellular components (e.g., cytoplasm, extracellular region, cell, thylakoid, etc.) ( Figure 4B ). Finally, we identified nineteen GO terms that were significantly enriched in A. brasilense-treated salt-stressed rice. These included eleven enriched in biological processes (e.g., carbohydrate metabolic process, response to endogenous stimulus, response to stimulus, secondary metabolic process, metabolic process, etc.), five in molecular functions (e.g., transferase activity, lipid binding, transporter activity, oxygen binding, and catalytic activity), and three in cellular components (e.g., extracellular region, cell wall, and external encapsulating structure) ( Figure 4C ).
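The over-representation test behind tools such as AgriGO is typically a hypergeometric (Fisher-type) test; a sketch with made-up counts:

```python
from scipy.stats import hypergeom

# Illustrative counts: M annotated genes in the genome background, n of them
# carrying a given GO term, N DEGs drawn, and k DEGs carrying the term
M, n, N, k = 20000, 400, 800, 40

# Enrichment p-value: P(X >= k) under random draws without replacement.
# The expected count under the null is N * n / M = 16, so k = 40 is enriched.
p_enrich = hypergeom.sf(k - 1, M, n, N)
```

In practice a tool would repeat this test for every GO term and correct the resulting p-values for multiple testing.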
We identified 786 DEGs in rice roots during associations with A. brasilense , seven dpt ( Supplementary Table 1 ). Among these were several defense-related genes (e.g., pathogenesis-related genes, defensins, thionins, chitinases, and cinnamoyl-CoA-reductase) and genes involved in the flavonoid pathway (e.g., naringenin synthase, chalcone synthase, chalcone-flavonone isomerase, and flavanol synthase). We also identified the differential expression of transcription factors (TFs) (e.g., MYB, WRKY, AP2/ERFs), protein kinases (PK) (e.g., calcium/calmodulin-dependent kinases, SHR5 receptor-like kinases, and wall-associated kinases), and transporters (e.g., nodulins, sugar transporters, amino acid and peptide transporters, ammonium transporters, and high-affinity nitrate transporters). Finally, we identified some hormone-related genes involved in auxin, gibberellin, ethylene, and phytosulfokine signaling in this dataset.
We identified 4061 DEGs in rice roots when grown under salt stress (200 mM NaCl), seven dpt ( Supplementary Table 2 ). Among them were several defense-related genes (e.g., chitinase genes, thionin genes, and defensin genes) and hormonal genes involved in stress responses and regulation (e.g., abscisic acid (ABA), jasmonic acid (JA)). We also identified the differential expression of transcription factors (TFs) (e.g., AP2/ERFs, BHLH, WRKY, MYB, bZIP, and zinc-finger families). Finally, we identified differential expression of antioxidant genes (e.g., catalase, peroxidase, and superoxide dismutase (SOD)) and key ion-transporters (e.g., high-affinity sodium transporters (HKT1/2), cation transporters, and potassium transporter proteins).
We identified 1387 DEGs in rice roots exposed to salt stress (200 mM NaCl) and treated with A. brasilense , seven dpt ( Supplementary Table 3 ). Among them were differentially expressed genes in the flavonoid biosynthesis pathway (e.g., naringenin synthase, chalcone synthase, and flavonone synthase), several hormone-related genes (e.g., GH3, phytosulfokines, auxin efflux carriers, auxin-responsive genes, ethylene insensitive 2 (EIN2), 1-aminocyclopropane-1-carboxylate (ACC) oxidase, and cytokinin-O-glucosyltransferases), transcription factors (TFs) (e.g., WRKY, bZIP, MYB, zinc fingers), and protein kinases (PK) (e.g., calcium/calmodulin-dependent kinases, wall-associated kinases, and SHR5 receptor-like kinases). In addition, we identified several antioxidant genes (e.g., APX, catalases, and peroxidases) and transporters (e.g., sugar transporters, high-affinity nitrate transporters, ammonium transporters, nodulins, auxin efflux carriers, and ion transporters).
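Operationally, a DEG call like those above reduces to thresholding each gene's fold change and adjusted p-value. A sketch with assumed cutoffs and made-up entries (the study's actual criteria are given in its Methods):

```python
def is_deg(log2fc, padj, fc_cut=1.0, p_cut=0.05):
    """Flag a gene as differentially expressed (assumed thresholds)."""
    return abs(log2fc) >= fc_cut and padj < p_cut

# Hypothetical (gene, log2 fold change, adjusted p-value) triples.
results = [
    ("gene_up",   2.3, 0.001),   # strong induction, significant
    ("gene_flat", 0.4, 0.200),   # small change, not significant
    ("gene_down", -1.8, 0.010),  # strong repression, significant
]
degs = [g for g, fc, p in results if is_deg(fc, p)]
print(degs)  # → ['gene_up', 'gene_down']
```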
Comparison of the rice transcriptomes under different treatments
Next, we compared the DEG lists between treatments to identify underlying gene expression trends. We performed the following comparisons: 1) DEGs identified in salt-stressed rice vs. DEGs identified in A. brasilense -treated salt-stressed rice, and 2) DEGs identified in A. brasilense -treated rice vs. DEGs identified in A. brasilense -treated salt-stressed rice.
In the first comparison, we identified 917 genes differentially expressed in both treatments ( Figure 5A ). Among these, 228 genes were differentially expressed under all three treatments, and 689 genes were differentially expressed only in salt-stressed plants and A. brasilense -treated salt-stressed plants ( Supplementary Table 4 ). The list of 689 DEGs included numerous salt stress-response and tolerance genes, including genes involved in abscisic acid and jasmonic acid signaling, genes encoding antioxidant enzymes, and genes involved in sodium and potassium transport and calcium signaling, among others ( Supplementary Table 4 ). In addition, we performed gene ontology (GO) analysis and identified GO terms involved in response to oxidative stress, response to stress, and transmembrane transport, among others ( Figure 5B ).
In the second comparison, we identified 326 genes differentially expressed in both treatments ( Figure 5A ). Among these, 228 genes were differentially expressed under all three treatments, and 98 genes were differentially expressed only in A. brasilense -treated plants and A. brasilense -treated salt-stressed plants ( Supplementary Table 4 ). The list of 98 DEGs included genes involved in defense and stress response, hormone signaling pathways, the flavonoid biosynthesis pathway, and nutrient transporters such as nitrate, ammonium, and sugar transporters, among others ( Supplementary Table 4 ). In addition, we performed gene ontology (GO) analysis and identified GO terms involved in oxidoreductase activity, antioxidant activity, and transporter activity, among others ( Figure 5C ).
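The overlap bookkeeping in these two comparisons (917 = 228 + 689 in the first; 326 = 228 + 98 in the second) is ordinary set arithmetic over the three DEG lists. A sketch with hypothetical gene IDs standing in for Supplementary Tables 1–3:

```python
azo      = {"g1", "g2", "g3", "g4"}        # DEGs: A. brasilense only
salt     = {"g3", "g4", "g5", "g6", "g7"}  # DEGs: salt stress only
azo_salt = {"g2", "g3", "g4", "g6"}        # DEGs: A. brasilense + salt

core        = azo & salt & azo_salt        # DE under all three treatments
salt_shared = (salt & azo_salt) - azo      # shared by the two salt treatments only
azo_shared  = (azo & azo_salt) - salt      # shared by the two A. brasilense treatments only

# First comparison: total shared = common core + salt-specific shared
print(len(salt & azo_salt), "=", len(core), "+", len(salt_shared))  # → 3 = 2 + 1
```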
Validation of gene expression patterns via RT-PCR
Lastly, we validated the expression patterns of eight differentially expressed genes across our three treatment groups. The genes validated via RT-PCR include brassinosteroid insensitive-1 receptor-like kinase, phytosulfokine precursor, WRKY27 transcription factor, MYB-family transcription factor, ent-kaurene synthase, naringenin,2-oxoglutarate 3-dioxygenase, bZIP transcription factor, and cam1-calmodulin kinase ( Figure 6 ). Our RT-PCR results confirm the gene expression trends observed in the RNA-seq experiment.
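One common way to quantify the RNA-seq/RT-PCR agreement reported here is a rank correlation of log2 fold changes across the validated genes. A minimal sketch with hypothetical fold-change values (not the study's measurements; no tie handling):

```python
def ranks(xs):
    """1-based ranks of xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rho via the classic 1 - 6*sum(d^2)/(n(n^2-1)) formula."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical log2 fold changes for eight validated genes.
rnaseq = [2.1, -1.4, 0.8, 3.0, -2.2, 1.1, -0.5, 1.9]
rtpcr  = [1.8, -1.1, 0.6, 2.5, -1.9, 0.9, -0.3, 1.6]
print(f"rho = {spearman(rnaseq, rtpcr):.2f}")  # → rho = 1.00 (same ordering)
```

A high rho, together with matching signs of the fold changes, supports the claim that the two platforms agree on expression trends.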
Soil salinity is considered one of the most limiting factors for agricultural productivity and food security ( FAO, 2015 ; FAO et al., 2022 ). In fact, rice is one of the most susceptible crops to high salt concentrations and experiences a significant yield reduction due to salinity stress ( Eynard et al., 2004 ; Munns and Tester, 2008 ; Rapaport et al., 2013 ; Hanin et al., 2016 ; Hoang et al., 2016 ). Several studies have shown that PGPB can improve salt tolerance in plants and promote their growth ( Hamdia et al., 2004 ; Pinedo et al., 2015 ; Ilangumaran and Smith, 2017 ; Kumar et al., 2020 ; Kruasuwan et al., 2023 ). However, the underlying molecular mechanisms by which PGPB improve salt stress tolerance are largely unknown. In this study, we show that the PGPB Azospirillum brasilense improves rice growth under high salt-stress conditions. Furthermore, we used a transcriptomic approach to identify the genetic pathways contributing to A. brasilense -mediated salt tolerance in rice. Below we discuss some of our findings from this study.
Before proceeding with the gene expression experiments, we investigated whether A. brasilense could improve salt tolerance in rice under our experimental conditions. First, we showed that the high-salt (100 mM and 200 mM NaCl) treatments impeded rice growth at seven and fourteen dpt. These results were expected, as we utilized a salt-susceptible rice cultivar, cv. Nipponbare, that has been shown to display severe yield losses under moderate-to-high salt concentrations ( Karan et al., 2012 ; Shankar et al., 2016 ). Next, we showed that A. brasilense inoculation improved rice growth at seven and fourteen dpt, as shown previously ( Thomas et al., 2019 ). Finally, we showed that plant mass improved in A. brasilense -treated salt-stressed plants compared to salt-stressed (100 and 200 mM NaCl) uninoculated plants at seven and fourteen dpt. Unsurprisingly, the improvement in plant mass of A. brasilense -treated salt-stressed plants over salt-stressed uninoculated plants was greater at the lower salt concentration (100 mM NaCl) at both time points. Our results also indicate that, at both concentrations, the salt treatment had a more drastic effect on plant mass at seven dpt than at fourteen dpt. This was expected, as several glycophytic crops are most susceptible to salt stress at early developmental stages ( Horie et al., 2001 ; Munns and Tester, 2008 ). The same trend applied to A. brasilense -treated salt-stressed plants, indicating that the detrimental effect of salt diminished over time under these conditions. These results suggest that A. brasilense inoculation played a role in improving salt stress tolerance in rice. However, the growth promotion observed in A. brasilense -treated salt-stressed plants was still lower than in A. brasilense -treated plants not exposed to salt stress. Nevertheless, we established an experimental system to investigate the molecular basis of plant growth promotion by A. brasilense when rice is grown under salt-stress conditions.
Next, we performed an RNA-seq experiment to identify gene expression changes in rice roots under three different treatments: 1) A. brasilense only, 2) 200 mM NaCl only, and 3) A. brasilense + 200 mM NaCl. We collected the transcriptomic data at seven dpt, as we observed that A. brasilense could improve rice growth under high salt concentration at this time point. We selected the higher NaCl concentration (200 mM) because we observed that A. brasilense could mitigate salt stress in rice even at this concentration. We focused on rice roots, as these tissues are the first point of interaction between the host plant and the symbiotic bacteria. Future studies can focus on identifying the transcriptomic responses in plant shoots to obtain a holistic understanding of the transcriptomic changes in the host plant. We identified hundreds of DEGs under each treatment. Here, we focused on genes previously implicated in salt response and tolerance in rice and in the rice- A. brasilense interaction, and compared their expression profiles across the treatments. Below we discuss some of these findings.
Genes involved in salt stress response and tolerance
Several reports have described the transcriptional responses in rice roots upon salt stress exposure ( Kumar et al., 2013 ; Shankar et al., 2016 ; Zhou et al., 2016 ; Chandran et al., 2019 ; Ma et al., 2022 ). These studies identified stress-responsive genes, transcription factors, and genes involved in hormone signaling and in sodium and potassium transport that were differentially expressed in salt-stressed rice plants. In this study, we identified several salt stress response genes differentially expressed in rice roots treated with 200 mM NaCl. We also examined how their expression was regulated in rice roots upon A. brasilense treatment ( A. brasilense only and A. brasilense + salt) ( Supplementary Table 6 ).
Genes involved in ABA and JA signaling
Studies have shown that osmotic stress due to high salt concentration increases abscisic acid (ABA) biosynthesis, subsequently regulating gene expression in the ABA signaling pathway ( Kumar et al., 2013 ; Sah et al., 2016 ; Huang et al., 2021 ). In this study, we identified two phytoene synthase genes (LOC_Os12g43130 and LOC_Os06g51290), a 9-cis-epoxycarotenoid dioxygenase gene (LOC_Os07g05940), and a zeaxanthin epoxidase gene (LOC_Os04g37619) upregulated in salt-stressed plants. The 9-cis-epoxycarotenoid dioxygenase gene (LOC_Os07g05940) displayed higher expression than the other genes. These genes have been shown to be expressed in rice upon exposure to salt stress, and their expression correlated with the level of ABA in rice roots ( Li et al., 2008 ; Welsch et al., 2008 ; Farooq et al., 2021 ). None of these genes were differentially expressed in rice roots treated with A. brasilense only. Interestingly, none of these genes were differentially expressed in A. brasilense -treated salt-stressed rice either. This expression pattern suggests that the regulation of ABA signaling pathway genes in response to salt stress is altered upon A. brasilense treatment. Besides the ABA signaling pathway, the salt stress response activates the jasmonate signaling pathway in several plants, including Arabidopsis and rice ( Riemann et al., 2015 ; Valenzuela et al., 2016 ; Delgado et al., 2021 ). For instance, salt stress causes increased levels of jasmonic acid in plant tissues and induces the expression of genes involved in jasmonic acid signaling ( Delgado et al., 2021 ). In this study, we identified two genes encoding jasmonate-induced proteins (LOC_Os04g24328 and LOC_Os04g24319) upregulated in expression in rice roots exposed to salt stress. However, these genes were not differentially expressed in salt-stressed plants treated with A. brasilense , suggesting the PGPB treatment alters the regulation of genes from the jasmonate signaling pathway that mediate salt stress responses.
Genes encoding antioxidant enzymes
Salt stress in plants leads to oxidative stress, causing reactive oxygen species (ROS) accumulation and reduced plant growth ( Qu et al., 2010 ; Hasanuzzaman et al., 2021 ). Plants deal with the adverse effects of oxidative stress via antioxidant enzymes such as catalases, glutathione-S-transferases (GST), and copper-zinc superoxide dismutases (SOD) ( Choudhury et al., 2017 ). Several studies in different plants have shown differential expression of genes encoding these antioxidant enzymes during salt stress ( Baltruschat et al., 2008 ; Vighi et al., 2017 ). In this study, we observed upregulated expression of genes encoding catalases (LOC_Os03g03910 and LOC_Os02g02400), GSTs (e.g., LOC_Os01g49710, LOC_Os10g34020, LOC_Os01g49720, LOC_Os06g08670), and SODs (LOC_Os07g46990, LOC_Os08g44770, LOC_Os03g11960) in salt-stressed plants. However, the majority of these genes were not differentially expressed in A. brasilense -treated salt-stressed plants. A similar expression pattern of genes encoding antioxidant enzymes was also detected in the salt-susceptible rice cultivar IR29 upon inoculation with the PGPB Streptomyces sp. GKU 895 ( Kruasuwan et al., 2023 ). The expression pattern of these genes suggests the PGPB treatment relieves stress in the host plant exposed to high salt concentrations.
Genes involved in sodium and potassium transport, and calcium signaling
Several genes involved in sodium and potassium ion transport are differentially expressed in salt-stressed plants ( Kumar et al., 2013 ; Zhou et al., 2016 ; Assaha et al., 2017 ; Zhang et al., 2018 ). The sodium transporter genes from the HKT family have been shown to play a role in salt tolerance in rice ( Horie et al., 2001 ; Ren et al., 2005 ; Horie et al., 2007 ; Zhou et al., 2016 ). We identified the HKT1 (LOC_Os01g20160) and HKT2 (LOC_Os06g48810) genes downregulated in rice exposed to salt treatment only. The rice potassium channels, AKT1 and SKOR , have also been suggested to be involved in salt tolerance ( Golldack et al., 2003 ; Fuchs et al., 2005 ; Musavizadeh et al., 2021 ). Previously, it was reported that AKT1 expression decreased in rice roots exposed to 150 mM NaCl ( Fuchs et al., 2005 ). We identified the AKT1 (LOC_Os01g45990) and SKOR (LOC_Os04g36740) potassium channels downregulated in salt-treated rice roots. Interestingly, HKT1 , HKT2 , and AKT1 were not differentially expressed in A. brasilense -treated plants, but SKOR was upregulated in A. brasilense -treated plants. In A. brasilense -treated salt-stressed plants, HKT2 was upregulated in expression, while HKT1 and SKOR were not differentially expressed. Taken together, the transcriptomic data suggest that A. brasilense inoculation regulates the expression of sodium and potassium transporters in salt-stressed plants. Studies have revealed that calcium signaling affects plant responses to salt stress ( Kader and Lindberg, 2010 ; Manishankar et al., 2018 ). Accumulating evidence suggests the involvement of a diverse array of calcium sensor proteins in different aspects of salt tolerance ( Kim et al., 2007 ; Seifikalhor et al., 2019 ). Calcineurin is a calcium- and calmodulin-dependent serine/threonine phosphatase mediating salt stress tolerance in rice ( Ma et al., 2005 ).
In this study, we identified two calcineurin genes (LOC_Os02g18930 and LOC_Os01g41510) downregulated in rice roots exposed to salt stress only. However, these genes were not differentially expressed in A. brasilense -treated salt-stressed rice roots. We also identified a calmodulin-related calcium sensor protein (LOC_Os04g41540) upregulated in expression in salt-stressed rice roots, but not differentially expressed in A. brasilense -treated salt-stressed plants. The sodium/calcium exchanger protein family plays an important role in cellular calcium homeostasis ( Singh et al., 2015 ). While their functions are not completely understood in plants, emerging evidence suggests they play a role in stress responses. For instance, in rice, some of these genes were induced in expression by salt stress ( Singh et al., 2015 ; Yang et al., 2021 ). We identified two genes encoding sodium/calcium exchanger proteins (LOC_Os01g37690 and LOC_Os02g21009) differentially expressed in salt-stressed rice, but these were not differentially expressed in rice treated with A. brasilense . These results suggest A. brasilense inoculation also modulates the expression pattern of genes involved in calcium sensing and homeostasis during salt stress.
Other salt-responsive and salt-tolerance genes
Recently, using a genome-wide meta-analysis of microarray and RNA-seq data, one study identified several promising candidate genes for salt tolerance in rice ( Mansuri et al., 2020 ). This list included a CBS domain-containing membrane protein (LOC_Os02g06410) and an expansin precursor gene (LOC_Os05g39990) involved in plant cell wall organization ( Mansuri et al., 2020 ). We noticed these genes were differentially expressed in salt-stressed plants, with the CBS domain-containing membrane protein (LOC_Os02g06410) having a higher expression than the expansin precursor gene (LOC_Os05g39990). However, these genes were not differentially expressed in plants treated with A. brasilense . Pentatricopeptide repeat (PPR) proteins are one of the largest protein families in plants and have been reported to be involved in plants' responses to different abiotic stresses, including salt ( Li et al., 2021 ). One recent study showed that the PPR-domain protein SOAR1 regulates salt tolerance in rice ( Lu et al., 2022 ). We identified several genes encoding PPR proteins (e.g., LOC_Os06g31300, LOC_Os10g10170, LOC_Os03g02430, LOC_Os03g11020) upregulated in salt-stressed plants. Interestingly, these genes were not differentially expressed in A. brasilense -treated salt-stressed plants. The dehydration-responsive element binding ( DREB ) proteins are key regulators of abiotic stresses in plants ( Zhao et al., 2016 ; Song et al., 2021 ). One recent study showed DREB genes promote tolerance to heat, drought, and salt in rice ( Wang et al., 2022 ). We identified several DREB genes (e.g., LOC_Os09g35010, LOC_Os09g35030) upregulated in expression in salt-stressed rice. However, these genes were not differentially expressed in A. brasilense -treated salt-stressed rice.
Transcription factor (TF) families such as MYBs, WRKYs, ARFs, and zinc fingers have been shown to be involved in mediating salt stress in plants ( Kumar et al., 2013 ; Shankar et al., 2016 ; Zhou et al., 2016 ; Chandran et al., 2019 ). Some of these TFs were also differentially expressed in rice IR29 under salt stress and Streptomyces sp. GKU 895 treatments ( Kruasuwan et al., 2023 ). In this study, we observed differential expression of MYBs (e.g., LOC_Os12g39640, LOC_Os07g02800, LOC_Os01g09280), WRKYs (e.g., LOC_Os06g30860, LOC_Os05g46020, LOC_Os01g60600), and zinc fingers (e.g., LOC_Os05g10670, LOC_Os06g15330, LOC_Os08g03310) in rice plants exposed to salt stress. Another study reported that some of these TFs (e.g., LOC_Os07g02800, LOC_Os01g60600, LOC_Os05g10670) were expressed in rice (Japonica cultivar Chilbo) roots treated with 250 mM NaCl for five days ( Chandran et al., 2019 ). While two (LOC_Os07g02800 and LOC_Os05g10670) of these TFs were differentially expressed in A. brasilense -treated salt-stressed rice roots, the WRKY108 TF (LOC_Os01g60600) was not differentially expressed in these samples. There were a few other TFs (e.g., LOC_Os12g39640, LOC_Os01g09280, LOC_Os06g30860, LOC_Os05g46020) which were differentially expressed in A. brasilense -treated salt-stressed rice roots. The expression pattern of these TFs suggests they likely play crucial roles in regulating the plant's responses to salt and A. brasilense .
Expression of genes in rice- Azospirillum brasilense interaction
Previously, we identified several defense-related genes, flavonoid biosynthesis pathway genes, receptor-like kinases, and nitrate and sugar transporters differentially expressed in rice roots upon inoculation with A. brasilense at one and fourteen dpt ( Thomas et al., 2019 ). Therefore, it is no surprise that in the current study, some of these genes were also differentially expressed in A. brasilense -treated rice roots at seven dpt. Here we compared the expression pattern of some of these genes across the three different treatments ( Supplementary Table 6 ).
Defense- and stress-related genes
Defense- and stress-related genes are usually differentially expressed in plants upon exposure to both abiotic (e.g., salt, heat) and biotic (e.g., bacteria, fungi) stresses ( Atkinson and Urwin, 2012 ). For instance, defense- and stress-related genes are differentially expressed in plants when exposed to salt stress ( Kumar et al., 2013 ; Zhang et al., 2017 ; Ma et al., 2022 ). Interestingly, multiple studies have reported that host plants suppress the expression of some of their defense-related genes during beneficial plant-microbe interactions ( Soto et al., 2009 ; Toth and Stacey, 2015 ; Thomas et al., 2019 ; Mukherjee, 2022 ; Wiggins et al., 2022 ). For instance, during rice- A. brasilense interactions, defense genes such as thionins, pathogenesis-related (PR) genes, chitinases, and cinnamoyl-CoA-reductase genes were differentially expressed in rice roots ( Thomas et al., 2019 ). In this study, we examined the expression patterns of some defense genes under the three treatments. For instance, we observed a chitinase gene (LOC_Os05g33140) upregulated in expression under all three treatments, suggesting it is not specific for either A. brasilense or salt treatment. Interestingly, this gene's expression was much higher in A. brasilense -treated salt-stressed plants than in the other plants. We found a thionin gene (LOC_Os06g31280) upregulated in expression to a similar level in both salt treatments (salt only and salt + A. brasilense ) but not differentially expressed in rice roots inoculated with A. brasilense only, suggesting its expression is specific to the abiotic stress response. Next, we identified three defense-related genes [a stress-induced protein (LOC_Os01g53790), a pathogenesis-related (PR) gene (LOC_Os04g50700), and a cinnamoyl-CoA-reductase gene (LOC_Os08g34280)] downregulated in expression in A. brasilense -treated rice but not differentially expressed in either salt treatment (salt only and salt + A. brasilense ).
These genes were also downregulated in expression in A. brasilense -treated rice at fourteen dpt ( Thomas et al., 2019 ). The expression pattern of these genes under these treatments highlights the specificity of these genes for rice- A. brasilense association. In conclusion, we observed differential expression of some defense- and stress-related genes under the three treatments. While some genes displayed an expression pattern specific to the salt treatment, others displayed an expression pattern specific to the A. brasilense treatment.
Genes involved in the flavonoid biosynthesis pathway and nutrient transport (nitrate, ammonium, and sugar)
Past reports have suggested the possible involvement of flavonoids during the rice- A. brasilense interaction ( Thomas et al., 2019 ; Mukherjee, 2022 ; Wiggins et al., 2022 ). In this study, we identified several genes from the flavonoid biosynthesis pathway differentially expressed in rice roots upon A. brasilense treatment. For example, a chalcone and stilbene synthase gene (LOC_Os07g34260), which encodes a key enzyme of the flavonoid biosynthesis pathway, was upregulated in expression in rice roots upon A. brasilense treatment. However, this gene was not differentially expressed under the two salt treatments (salt only and salt + A. brasilense ), suggesting that the high salt conditions likely interfered with its expression pattern. We observed a similar expression pattern with other genes (e.g., LOC_Os05g41645, LOC_Os10g39140) from the flavonoid pathway. In plant-PGPB interactions, the growth promotion effects the host plants experience are due to multiple factors, including improved nutrient uptake facilitated by transporters. Previous transcriptomic studies identified the expression of nitrate, ammonium, and sugar transporters in rice during interactions with A. brasilense ( Thomas et al., 2019 ; Wiggins et al., 2022 ). In this study, we identified two nitrate transporters (LOC_Os02g38230 and LOC_Os01g50820) upregulated in rice roots treated with A. brasilense only. These two nitrate transporters were also upregulated in rice roots treated with A. brasilense at one and fourteen dpt, suggesting their significance in this interaction ( Thomas et al., 2019 ). In A. brasilense -treated salt-stressed rice, the nitrate transporter gene (LOC_Os02g38230) was upregulated in expression, whereas it was not differentially expressed in plants exposed to salt stress only. In addition, we identified an ammonium transporter (LOC_Os12g08130) and a sugar transporter (LOC_Os11g05390) upregulated in expression in rice roots treated with A. brasilense .
Interestingly, both transporters were also upregulated in expression in the A. brasilense -treated salt-stressed rice plants, while showing no differential expression in rice roots treated only with salt. Overall, these findings indicate that, although some A. brasilense -associated genes were regulated upon high salt treatment, key transporter genes likely involved in the rice- A. brasilense association were also expressed under salt stress. Furthermore, the expression profile of these nutrient transporters reinforces the phenotypic observations made earlier, where inoculation with A. brasilense improved plant growth in rice.
Genes involved in hormone signaling
Plant hormones, such as auxin and ethylene, are necessary for numerous biological processes, including growth, development, signaling, and response to stress ( Ferguson and Mathesius, 2014 ; Verma et al., 2016 ; Waadt et al., 2022 ). In fact, different plant hormones mediate salt stress responses to regulate plant growth adaptation ( Yu et al., 2020 ). Similarly, a few reports have also elucidated the importance of hormone signaling during plant-PGPB interactions ( Camilios-Neto et al., 2014 ; Brusamarello-Santos et al., 2019 ; Thomas et al., 2019 ; Mukherjee, 2022 ; Wiggins et al., 2022 ). Therefore, genes involved in different hormonal pathways will likely be regulated when plants are exposed to salt stress and A. brasilense . Here we compared the expression pattern of some genes from hormone signaling pathways across the different treatments ( Supplementary Table 6 ).
Auxins are major regulators of plant growth and development and responses to diverse biotic and abiotic stresses ( Verma et al., 2016 ; Waadt et al., 2022 ). Naturally, they are involved in plant development in response to salt stress conditions. Some studies have indicated that under salt stress, plants have decreased auxin levels and auxin transporter expression ( Du et al., 2012 ; Liu et al., 2015 ). In this study, we observed a diverse array of auxin-related genes (e.g., auxin response factors, auxin-induced proteins, auxin efflux carriers, auxin-responsive SAUR genes) differentially expressed in plants under salt stress-only conditions. Many of these genes were downregulated in expression. Auxin is also involved in different plant-PGPB interactions. In some plant-PGPB interaction reports, auxin-responsive genes were downregulated in expression ( Thomas et al., 2019 ; Mukherjee, 2022 ; Wiggins et al., 2022 ). We observed some genes involved in auxin signaling and biosynthesis (e.g., auxin-responsive protein, auxin-induced protein, flavin monooxygenase) downregulated in expression in A. brasilense -treated plants. Surprisingly, there was almost no overlap in the expression of these auxin-related genes between the salt-only and A. brasilense -only treatments. Nearly all the genes (e.g., LOC_Os04g36054, LOC_Os01g36560, LOC_Os01g70050, LOC_Os12g43110) differentially expressed in the salt stress-only treatment were not differentially expressed in response to the A. brasilense -only treatment, and vice versa (e.g., LOC_Os08g44750, LOC_Os04g03980, LOC_Os01g16714). This expression pattern suggests that the auxin signaling pathways mediating the plant's responses and adaptation to salt and to A. brasilense are likely separate. Furthermore, we noticed a consistent pattern in the expression of these genes (e.g., LOC_Os09g32770, LOC_Os08g42198, LOC_Os06g07040, LOC_Os01g55940) in plants exposed to both salt and A. brasilense .
Almost all the genes differentially expressed under the combination treatment were expressed in the salt- or A. brasilense -only treatment and displayed the same expression pattern. These observations suggest that auxin-related genes specific to salt and A. brasilense treatments affect the rice transcriptome and mediate the plant’s responses and adaptations.
Ethylene is another major plant hormone that regulates multiple aspects of plant biology, including responses to biotic and abiotic stresses ( Verma et al., 2016 ; Chen et al., 2022 ; Waadt et al., 2022 ). These studies have reported that ethylene levels increase in plants in response to different stresses, leading to impaired growth. The 1-aminocyclopropane-1-carboxylate (ACC) oxidase gene is essential for the ethylene biosynthesis pathway ( Houben and Van De Poel, 2019 ). In this study, we observed several genes encoding ACC oxidase (e.g., LOC_Os09g27750, LOC_Os04g10350, LOC_Os08g30210, LOC_Os05g05680) differentially expressed in rice exposed to salt stress only but not differentially expressed in A. brasilense -treated rice plants. Similarly, we observed differential expression of two ACC oxidase genes (LOC_Os02g53180 and LOC_Os11g08380) in A. brasilense -treated plants, but not in plants exposed to salt stress only. A similar expression pattern was also observed for other ethylene-related genes (e.g., LOC_Os04g41570, LOC_Os07g06130, LOC_Os04g08740). These findings are similar to our earlier observation with auxin-related genes and suggest there are separate ethylene signaling pathways mediating the plant's responses to salt and A. brasilense . Furthermore, in plants exposed to salt and A. brasilense , both classes (salt-specific and A. brasilense -specific) of genes (e.g., LOC_Os04g10350, LOC_Os08g30210, LOC_Os11g08380) were differentially expressed. Overall, these findings signify that regulation of phytohormone pathways in rice roots is essential for maintaining the beneficial association with A. brasilense and improving plant growth under salt stress.
Conclusion
Our findings indicate that the plant growth-promoting bacterium A. brasilense improves growth in salt-stressed rice. This opens the possibility of using this PGPB to mitigate salt stress in salt-sensitive crops. Our transcriptomic data suggest that A. brasilense improves rice growth under salt stress by regulating the expression of key genes involved in defense and stress response, abscisic acid and jasmonic acid signaling, and ion and nutrient transport. Our results also emphasize that genes in the auxin and ethylene signaling pathways are critical for the interaction between rice and A. brasilense under salt stress. In this study, we collected the transcriptomic data at seven days post-treatment. Future studies can identify gene expression changes at other time points. One limitation of the study is that it was performed under in vitro conditions, which do not represent actual field conditions. Similar gene expression studies can be performed under field conditions in the future. Nevertheless, our findings provide important insights into A. brasilense -mediated salt stress tolerance and adaptation in rice. Using alternative approaches such as PGPB will play an important role in growing crops sustainably under stressful environmental conditions.
Conceived, designed, and contributed reagents/materials/analysis tools: AM. Performed the experiments: ZD, SD, MG, SG, HP, and JC. Analyzed the data: ZD, SD, YR, GG, and AM. Wrote the paper: ZD, AM, YR, and GG. All authors contributed to the article and approved the submitted version.
Major food crops, such as rice and maize, display severe yield losses (30–50%) under salt stress. Furthermore, problems associated with soil salinity are anticipated to worsen due to climate change. Therefore, it is necessary to implement sustainable agricultural strategies, such as exploiting beneficial plant-microbe associations, for increased crop yields. Plants can develop associations with beneficial microbes, including arbuscular mycorrhiza and plant growth-promoting bacteria (PGPB). PGPB improve plant growth via multiple mechanisms, including protection against biotic and abiotic stresses. Azospirillum brasilense, one of the most studied PGPB, can mitigate salt stress in different crops. However, little is known about the molecular mechanisms by which A. brasilense mitigates salt stress. This study shows that total and root plant mass is improved in A. brasilense-inoculated rice plants compared to the uninoculated plants grown under high salt concentrations (100 mM and 200 mM NaCl). We observed this growth improvement at seven and fourteen days post-treatment (dpt). Next, we used transcriptomic approaches and identified differentially expressed genes (DEGs) in rice roots when exposed to three treatments: 1) A. brasilense, 2) salt (200 mM NaCl), and 3) A. brasilense and salt (200 mM NaCl), at seven dpt. We identified 786 DEGs in the A. brasilense-treated plants, 4061 DEGs in the salt-stressed plants, and 1387 DEGs in the salt-stressed A. brasilense-treated plants. In the A. brasilense-treated plants, we identified DEGs involved in defense, hormone, and nutrient transport, among others. In the salt-stressed plants, we identified DEGs involved in abscisic acid and jasmonic acid signaling, antioxidant enzymes, sodium and potassium transport, and calcium signaling, among others. In the salt-stressed A. brasilense-treated plants, we identified some genes involved in salt stress response and tolerance (e.g., abscisic acid and jasmonic acid signaling, antioxidant enzymes, calcium signaling) and sodium and potassium transport differentially expressed, among others. We also identified some A. brasilense-specific plant DEGs, such as nitrate transporters and defense genes. Furthermore, our results suggest genes involved in auxin and ethylene signaling are likely to play an important role during these interactions. Overall, our transcriptomic data indicate that A. brasilense improves rice growth under salt stress by regulating the expression of key genes involved in defense and stress response, abscisic acid and jasmonic acid signaling, and ion and nutrient transport, among others. Our findings will provide essential insights into salt stress mitigation in rice by A. brasilense. | Supplementary Material | Acknowledgments
The authors wish to thank the USDA Dale Bumpers National Rice Research Center, Stuttgart, Arkansas, for providing rice seeds Oryza sativa (cv. Nipponbare). The authors also thank the reviewers for their comments and helpful suggestions.
Funding
The authors declare financial support was received for the research, authorship, and/or publication of this article. This publication was made possible by the Arkansas INBRE program, supported by a grant from the National Institute of General Medical Sciences (NIGMS), P20 GM103429, from the National Institutes of Health.
Data availability statement
The raw sequence data is publicly available from the Sequence Read Archive (SRA) under the BioProject accession ID PRJNA962515. | CC BY | no | 2024-01-16 23:35:04 | Front Agron. 2023 Oct 4; 5:1216503 | oa_package/9a/fe/PMC10785826.tar.gz |
PMC10785958 | 38222464 | INTRODUCTION
TRANSCRANIAL focused ultrasound (tFUS) is a therapeutic modality successfully demonstrated for drug delivery via blood-brain barrier opening, studying the brain through neuromodulation, and liquefying clots through histotripsy [ 1 ]. tFUS is suitable for targeting cortical and deep regions in the brain while maintaining a small, ellipsoidal-shaped volume of concentrated energy on the millimeter scale. The focal size of tFUS transducers requires precise positioning of the transducer relative to the subject’s head, and accurate dosimetry is key to therapeutic outcomes. tFUS procedures have been performed in the magnetic resonance (MR) environment, where MR imaging can be used to assess the transducer’s position and localize the focus through direct measurements of the interaction of ultrasound and tissue such as MR thermometry and MR-acoustic radiation force imaging (MR-ARFI) [ 2 ]. A straightforward method to position a spherically curved transducer under MR-guidance is to collect an anatomical scan of the subject so that the transducer surface is visible in the image and the focus location is then estimated using the geometric properties of the transducer [ 3 ]. MR-guidance is beneficial for target localization and validation; however, tFUS procedures guided by MR imaging are limited to specific patient populations and facilities with access to MR scanners.
Positioning methods independent of the MR environment are often used to expand patient eligibility for tFUS procedures and reduce associated costs. Transducer positioning methods used outside the MR environment for tFUS procedures include patient-specific stereotactic frames, ultrasound image guidance, and optical tracking. Patient-specific stereotactic frames [ 4 ], [ 5 ], [ 6 ], [ 7 ] allow repeatable positioning of a transducer onto a subject’s head with sub-millimeter accuracy. Subject-specific frames require individualized design effort and may require invasive implants unsuitable for procedures with healthy human subjects. Ultrasound image guidance uses pulse-echo imaging during tFUS procedures to position a transducer relative to the skull, determining the distance from the skull using the receive elements of a transducer [ 5 ], [ 8 ], [ 9 ]. Ultrasound-guided experiments result in sub-millimeter spatial targeting error but require large, multi-element arrays and receive hardware that can be expensive. Optical tracking has been used to position transducers in a number of tFUS studies with animals [ 10 ], [ 11 ], [ 12 ], [ 13 ], [ 14 ] and humans [ 15 ], [ 16 ], [ 17 ], [ 18 ] where the focus location of a transducer is defined by a tracked tool, and the position and orientation of the tool are updated in real-time relative to a camera. Optical tracking is completely noninvasive for the subject but has larger targeting error than stereotactic frames and ultrasound image guidance, with reported accuracy in the range of 1.9–5.5 mm [ 11 ], [ 19 ], [ 20 ], [ 21 ], [ 22 ], [ 23 ]. Part of the large targeting error of optical tracking may be attributed to the heterogeneous skull, which is known to shift and distort the focus but is not accounted for in the focus location predicted by optical tracking systems.
Compensating for the skull is a challenge because the transducer focus is difficult to predict after interacting with the heterogeneous skull layers. There can be significant differences in skull thickness and shape between subjects [ 24 ], thus a uniform correction for the skull is not optimal when working with a large population and a patient-specific approach is preferable. Acoustic properties such as density, speed of sound, and attenuation can be estimated for a single subject from computed tomography (CT) images of the skull [ 25 ], [ 26 ], [ 27 ], [ 28 ], pseudo-CTs generated directly from MR images [ 29 ], [ 30 ], or pseudo-CTs from trained neural networks [ 31 ], [ 32 ], [ 33 ], [ 34 ], [ 35 ]. The acoustic properties of the skull can then be input into acoustic solvers to simulate the resulting pressure field, temperature rise, or phase and amplitude compensation for a particular subject. There are a number of acoustic simulation tools available [ 36 ], [ 37 ], where the appropriate simulation method for an application involves a trade-off between simulation speed and accuracy. For transducer positioning methods outside the MR scanner, acoustic simulations have been included in studies to estimate in situ pressure, spatial extent, and heating [ 38 ]. Additionally, tFUS procedures guided by MR imaging could benefit from simulations to include subject-specific skull effects and compare the simulated focus with available MR localization tools. Although acoustic simulations are commonly integrated into the preplanning workflow or retrospective analysis of tFUS procedures [ 39 ], methods to position the transducer and subject in simulation space are not explicitly defined.
Here, we describe a software pipeline to generate patient-specific acoustic simulation grids informed by geometric transformations that are available during tFUS procedures guided by optical tracking. Our pipeline uses open-source tools that allow the method to be readily implemented. The method creates a transformation from the transducer to the simulated space using three MR scans; the procedure can then be repeated outside of the magnet. We demonstrate the software pipeline in neuronavigated experiments with a standard tissue-mimicking phantom with and without an ex vivo skull cap and compare the spatial locations of the simulated focus relative to a ground truth focus detected by MR-ARFI. We evaluate a method that updates the transducer location based on MR-ARFI so that the focus location more closely matches the MR ground truth spatially. By using transforms obtained from optical tracking, the software pipeline streamlines simulation and provides a patient-specific estimate of the in situ pressure. Validation studies with MR-ARFI revealed that although the simulated focus tracks closely with the one predicted by optical tracking, the error in actual focusing is defined by the accuracy limits of optical tracking, which can be compensated for but requires imaging feedback.
PHANTOM CREATION
Agar-graphite phantoms were used for all phantom and ex vivo skull cap phantom experiments. Two custom 3D-printed transducer coupling cones designed in-house were used for each setup that better conformed with either the phantom mold or the skull cap, shown in Figure 1 . For the phantom setup, the cone was filled with an agar-only layer that consisted of cold water mixed with 1% weight by volume (w/v) food-grade agar powder (NOW Foods, Bloomingdale, IL, USA). The mixture was heated to a boil and once cooled, filled approximately 3/4 of the cone. To create the agar-graphite phantom, a beaker was filled with water mixed with 1% w/v agar powder and 4% w/v 400 grit graphite powder (Panadyne Inc, Montgomeryville, PA, USA). The beaker was heated in the microwave until the contents boiled, and the mixture was removed from heat and periodically stirred to prevent the graphite settling out of solution before the phantom set. Once cooled, the phantom mixture topped off the cone and filled the cylindrical acrylic phantom mold that was adhered to the opening of the transducer coupling cone.
For the ex vivo human skull setup, the transducer cone was filled with enough water so that the latex membrane created a flat surface that the skull cap rested on and was secured in place with Velcro straps. The skull cap was rehydrated and degassed in water 24 hours prior to the experiment. The inside of the skull cap was first coated with an agar-only layer detailed above to fill gaps around the sutures of the skull cap. The agar-graphite layer was created using the same methods for the cylindrical phantom and poured to fill the remainder of the skull cap.
NEURONAVIGATION
The optical tracking setup consisted of a Polaris Vicra camera (Northern Digital Inc. (NDI), Waterloo, Ontario, CAN), custom transducer and reference trackers, and an NDI stylus tool. The custom trackers were designed with four retroreflective spheres, as recommended by NDI’s Polaris Tool Guide, and printed in-house. The geometry of each tool was defined using NDI 6D Architect software. A tracked tool’s position and orientation were updated in real-time via the Plus toolkit [ 40 ], which streamed data from the optical tracking camera to 3D Slicer [ 41 ] and interfaced with the OpenIGTLink [ 42 ] module. The streamed data updated the transducer’s focus position and projected a point onto an image volume through a series of transformations to traverse coordinate systems associated with the image (I), physical (P), tracker (T), and ultrasound (U) spaces. Transformations between two coordinate systems are represented as B T A , or a transform from coordinate system A to coordinate system B.
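As a concrete illustration of this transformation hierarchy, the chain I T P · P T T · T T U maps the transducer focus from ultrasound space into image space. The sketch below composes 4×4 homogeneous matrices; the calibration values are hypothetical, purely translational placeholders (the real transforms also carry rotations):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results (identity rotations for illustration only).
T_P_from_T = make_transform(np.eye(3), [10.0, 0.0, 0.0])   # tracker -> physical (P T T)
T_I_from_P = make_transform(np.eye(3), [0.0, 5.0, 0.0])    # physical -> image (I T P)
T_T_from_U = make_transform(np.eye(3), [0.0, 0.0, 63.2])   # ultrasound -> tracker (T T U)

# Chain right-to-left: ultrasound -> tracker -> physical -> image.
T_I_from_U = T_I_from_P @ T_P_from_T @ T_T_from_U

focus_us = np.array([0.0, 0.0, 0.0, 1.0])  # focus at the ultrasound-space origin
focus_img = T_I_from_U @ focus_us          # focus expressed in image space
```

Because the matrices act right-to-left, streaming an updated P T T from the camera immediately propagates the new pose through the whole chain.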
The creation and calibration of required transformations have been established by previous work from our group and others [ 19 ], [ 20 ], [ 43 ]. Briefly, six doughnut-shaped fiducials (MM3002, IZI Medical Products, Owings Mills, Maryland, USA) with an outer diameter of 15 mm and an inner diameter of 4.5 mm were placed around the phantom mold or the transducer cone. The calibrated NDI stylus tool was placed in each fiducial to localize the points in physical space. The corresponding fiducials were collected in image space from a T 1 -weighted scan (FOV: 150 mm × 170 mm × 150 mm, voxel size: 0.39 mm × 0.50 mm × 0.39 mm, TE: 4.6 ms, TR: 9.9 ms, flip angle: 8°) acquired on a 3T human research MRI scanner (Ingenia Elition X, Philips Healthcare, Best, NLD) with a pair of loop receive coils (dStream Flex-S; Philips Healthcare, Best, NLD). The fiducials were registered with automatic point matching through SlicerIGT’s [ 44 ] Fiducial Registration Wizard and resulted in the physical-to-image space transformation ( I T P ). The root mean square error distance between the registered fiducials (i.e., the Fiducial Registration Error) was recorded from the module [ 45 ].
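The Fiducial Registration Wizard solves a standard point-based rigid registration problem. A minimal sketch of the underlying computation, using a Kabsch/SVD solution and synthetic fiducial coordinates (not the coordinates from these experiments), is:

```python
import numpy as np

def register_points(src, dst):
    """Rigid (Kabsch) registration mapping src points onto dst; returns R, t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # reflection-safe rotation
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between the registered src points and the dst points."""
    resid = (src @ R.T + t) - dst
    return np.sqrt(np.mean(np.sum(resid**2, axis=1)))

# Six hypothetical fiducial positions in physical space (mm).
rng = np.random.default_rng(0)
phys = rng.uniform(-50, 50, size=(6, 3))
# Image-space fiducials: a known rotation + translation of the physical points.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 10.0])
img = phys @ R_true.T + t_true

R, t = register_points(phys, img)                      # estimate of I T P
fre = fiducial_registration_error(phys, img, R, t)     # ~0 for noise-free points
```

With real stylus measurements the residual is nonzero, and the reported FRE summarizes how consistently the six points were localized in the two spaces.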
Two custom trackers were used: a reference tracker as a global reference that allowed the camera to be repositioned as needed during experiments, and a transducer tracker. The transducer tracker’s position and orientation in physical space was defined relative to the reference tracker and reported by the NDI camera as a transformation matrix, P T T . A single-element, spherically curved transducer (radius of curvature = 63.2 mm, active diameter = 64 mm, H115MR, Sonic Concepts, Bothell, Washington, USA) was used for all experiments, operated at the third harmonic frequency of 802 kHz. The geometric focus of the transducer was calibrated relative to the transducer’s tracker by attaching a rod with an angled tip, machined so that the tip was at the center of the focus [ 19 ]. The rod was pivoted about a single point to create the transformation T T U , which is a translation of the transducer’s focus location from the transducer’s tracker. The focus location was visualized using a sphere model created with the ‘Create Models’ module in Slicer, where the radius was set by the expected full-width at half-maximum focal size of the transducer at 802 kHz. T T U of this transducer was validated in previous work [ 13 ] with MR thermometry, which included a bias correction of [X,Y,Z] = [2,0,4] mm where the z-direction is along the transducer’s axis of propagation.
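Pivot calibration of this kind is typically posed as a linear least-squares problem: every tracked pose (R_i, t_i) of the tool must satisfy R_i·p_tip + t_i = p_pivot for a fixed pivot point, which stacks into [R_i | −I][p_tip; p_pivot] = −t_i. A self-contained sketch with simulated poses (the tip offset and pivot below are illustrative, not this study's calibration values):

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Least-squares pivot calibration: solves R_i @ p_tip + t_i = p_pivot
    over all poses. Returns the tip offset in the tool frame and the pivot."""
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3 * i:3 * i + 3, :3] = R            # columns for p_tip
        A[3 * i:3 * i + 3, 3:] = -np.eye(3)   # columns for p_pivot
        b[3 * i:3 * i + 3] = -np.asarray(t)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Simulated poses: a tip offset (tool frame) and a fixed pivot (camera frame).
tip_true = np.array([0.0, 0.0, 63.2])      # e.g. focus along the tool axis (mm)
pivot_true = np.array([20.0, -10.0, 150.0])
rotations, translations = [], []
rng = np.random.default_rng(1)
for _ in range(20):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal matrix
    Q *= np.sign(np.linalg.det(Q))                # force a proper rotation
    rotations.append(Q)
    translations.append(pivot_true - Q @ tip_true)

tip_est, pivot_est = pivot_calibration(rotations, translations)
```

The recovered tip offset plays the role of the translational part of T T U; with noisy tracker data the least-squares residual also gives a quality metric for the calibration.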
SIMULATION PIPELINE
A new transformation, U T S , was required to add to the transformation hierarchy and transform the simulation grid (S) to the ultrasound coordinate system shown in Fig 2a . First, a model of the transducer was created using the k-Wave function makeBowl, with a cube centered at the geometric focus location that assisted with visualization. The model was imported into Slicer as a NIFTI file and was manually translated so that the center of the model’s cube was aligned with the predicted focus from optical tracking shown in step 1 of Fig. 2b . Because the transducer surface was visible in the T 1 -weighted image due to the large signal from the water-filled cone, the transducer model was manually rotated until the model matched the orientation of the transducer surface from the MR image, as in step 2 of Fig. 2b . The transducer model may be slightly offset from the transducer surface in the T 1 -weighted image, especially if a bias correction was applied in the previous transformation T T U , as was the case in Fig 2b . This calibration was performed with three separate T 1 -weighted images and the resultant transformations, a separate transformation for each rotation and translation, were averaged to create the final transformation, U T S . The creation of U T S only needs to be performed once and can be added to any scene coupled with the same transducer model, transducer, and transducer tracker.
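Averaging rigid transformations is not a simple matrix mean, because the arithmetic mean of rotation matrices is generally not a rotation. One reasonable approach when the repeat calibrations are close together, as assumed here, is to average the translations arithmetically and project the mean rotation back onto SO(3) with an SVD; the three input transforms below are hypothetical:

```python
import numpy as np

def average_transforms(transforms):
    """Average 4x4 rigid transforms: arithmetic mean of the translations, and
    the mean rotation projected back onto SO(3) via SVD (a reasonable
    approximation when the input rotations are close to one another)."""
    Ts = np.asarray(transforms, dtype=float)
    R_mean = Ts[:, :3, :3].mean(axis=0)
    U, _, Vt = np.linalg.svd(R_mean)
    R = U @ Vt
    if np.linalg.det(R) < 0:              # guard against an improper rotation
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    out = np.eye(4)
    out[:3, :3] = R
    out[:3, 3] = Ts[:, :3, 3].mean(axis=0)
    return out

def rigid(deg_about_z, t):
    """Helper: 4x4 transform with a small rotation about z and translation t."""
    a = np.deg2rad(deg_about_z)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Three hypothetical repeat calibrations of the same transform.
T_avg = average_transforms([rigid(-2, [1, 0, 0]),
                            rigid(0, [0, 1, 0]),
                            rigid(2, [0, 0, 1])])
```

For the symmetric ±2° example the projected mean rotation is the identity, and the translation is the simple mean of the three offsets.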
The workflow to incorporate simulations using optical tracking data is detailed in Fig. 2c . U T S was added to the saved Slicer scene that contained the neuronavigation data from eight phantom experiments and three ex vivo skull cap phantom experiments. First, the transformation hierarchy from the saved scene was used to transform the transducer model into image space. Next, MR/CT volumes were resampled with the ‘Resample Image (BRAINS)’ Module so that the voxel and volume sizes matched the simulation grid. For skull phantom experiments, a CT image was acquired on a PET/CT scanner (Philips Vereos, Philips Healthcare, Best, NLD) with an X-ray voltage of 120 kVp and exposure of 300 mAs (pixel resolution: 0.30 mm and slice thickness: 0.67 mm) and was reconstructed with a bone filter (filter type ‘D’). The CT image was manually aligned to match the orientation of the T 1 -weighted MR volume (parameters described in II-B ) and then rigidly registered using the ‘General Registration (BRAINS)’ module in Slicer. The simulation grid containing the transducer model and resampled MR/CT volumes were saved as NIFTI file formats to use for simulations.
All simulations were performed using the MATLAB acoustic toolbox, k-Wave [ 46 ], with a simulation grid size of [288,288,384] and isotropic voxel size of 0.25 mm, where we maintained greater than 7 points per wavelength in water for simulation stability [ 47 ]. For phantom setups, the simulation grid was assigned acoustic properties of water selected from the literature [ 48 ] and shown in Table 1 with the exception of absorption. We previously measured the attenuation in agar-graphite phantoms as 0.6 dB/cm/MHz [ 20 ] and assumed absorption was a third of the attenuation value [ 48 ], [ 49 ]. Thus, all pixels in the agar-graphite layer of the phantom were assigned 0.2 dB/cm/MHz for α tissue . For simulations with the skull cap, the skull was extracted from the CT image using Otsu’s method [ 50 ], [ 51 ] and a linear mapping between Hounsfield units and bone porosity [ 25 ] was used to derive acoustic properties of the skull. A ‘‘tissue’’ mask was created from the agar-graphite phantom using Slicer’s ‘Segment Editor’ module and assigned as α tissue . For all simulations, the RMS pressure was recorded and the focus location was defined as the maximum pixel. For cases where the maximum pressure was at the skull, as previously observed in simulations with this transducer [ 52 ], the tissue mask was eroded using the ‘imerode’ function in MATLAB to exclude pixels closest to the skull. The eroded mask was applied to the simulated pressure field to select the maximum in situ pressure.
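The HU-to-porosity conversion and the subsequent property assignment can be sketched as below. The property bounds are representative literature-style values for water and cortical bone, not the exact parameters of Table 1, and the linear porosity model follows the cited mapping only approximately:

```python
import numpy as np

# Representative endpoint values (assumptions, not this study's exact table).
RHO_WATER, RHO_BONE = 1000.0, 2200.0   # density, kg/m^3
C_WATER, C_BONE = 1500.0, 3100.0       # speed of sound, m/s
ALPHA_MIN, ALPHA_MAX = 0.2, 8.0        # absorption, dB/cm/MHz
BETA = 0.5                             # porosity exponent for absorption

def skull_properties_from_ct(hu, hu_max=None):
    """Map CT Hounsfield units to density, sound speed, and absorption via a
    linear porosity model: porosity = 1 - HU / max(HU)."""
    hu = np.clip(np.asarray(hu, dtype=float), 0.0, None)
    if hu_max is None:
        hu_max = hu.max()
    porosity = 1.0 - hu / hu_max
    rho = porosity * RHO_WATER + (1.0 - porosity) * RHO_BONE
    c = C_WATER + (C_BONE - C_WATER) * (1.0 - porosity)
    alpha = ALPHA_MIN + (ALPHA_MAX - ALPHA_MIN) * porosity**BETA
    return rho, c, alpha

# Water-like, trabecular-like, and dense cortical-like voxels.
rho, c, alpha = skull_properties_from_ct([0, 900, 1800])
```

Dense bone (maximum HU) maps to the bone endpoints with minimal absorption, while fully porous voxels fall back to water-like density and speed with the highest absorption, mirroring how each skull voxel would be assigned before being passed to the acoustic solver.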
MR VALIDATION
The focus was localized using MR-acoustic radiation force imaging (MR-ARFI) described in prior work [ 52 ], measuring displacement induced by the transducer in both phantom and ex vivo skull cap setups. MR-ARFI was acquired using a 3D spin echo sequence with trapezoidal motion encoding gradients (MEG) of 8 ms duration and 40 mT/m gradient amplitude strength (voxel size: 2 mm × 2 mm × 4 mm, TE: 35 ms, TR: 500 ms, reconstructed to 1.04 mm × 1.04 mm × 4 mm). Only positive MEG polarity was required to measure sufficient displacement in phantoms. With the skull, two scans were acquired with opposite MEG polarity. A low duty-cycle ultrasound sonication was used for all ARFI scans (8.5 ms per 1000 ms/2 TRs, free-field pressure of 1.9 MPa for phantoms and 3.2 MPa for ex vivo skull cap).
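With opposite-polarity MEG acquisitions, the measured displacement is proportional to the phase difference between the two scans. A simplified conversion, assuming rectangular gradient lobes and a displacement that is constant during encoding (the trapezoidal MEGs used here would need a small correction factor), is u = Δφ / (2·γ·G·τ):

```python
GAMMA = 267.522e6  # proton gyromagnetic ratio (rad/s/T)

def arfi_displacement_um(dphi_rad, grad_T_per_m, dur_s):
    """Displacement (micrometres) from the phase difference of two MR-ARFI
    acquisitions with opposite motion-encoding-gradient polarity.
    Simplifying assumptions: rectangular gradient lobes and a displacement
    that is constant during the MEG, so u = dphi / (2 * gamma * G * tau)."""
    return 1e6 * dphi_rad / (2.0 * GAMMA * grad_T_per_m * dur_s)

# Sequence parameters from this study: 40 mT/m amplitude, 8 ms duration.
u = arfi_displacement_um(dphi_rad=0.17, grad_T_per_m=0.04, dur_s=0.008)
# roughly 1 micrometre of displacement for a 0.17 rad phase difference
```

This also shows why only a single polarity sufficed in phantoms: the larger phantom displacements produce phase well above the noise floor even without the doubled encoding sensitivity of the subtraction.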
ERROR METRICS
Metrics were chosen that assessed the targeting accuracy of the neuronavigation system and the spatial accuracy of the simulated pressure fields. First, the target registration error (TRE) of our neuronavigation system was determined from the Euclidean distance between the center of the predicted focus from optical tracking and the center of the MR-ARFI focus for all experiments, or TRE Opti,ARFI . It was assumed that the predicted focus from optical tracking was positioned at the desired target. The MR measured focus from the displacement image was manually selected by adjusting the visualization window and level tools of the volume in 3D Slicer to pick the approximate maximum pixel within the center of the focus. To compute the accuracy of the focus simulated based on optical tracking geometry, the distance between the center of the simulated focus and the optically tracked focus was calculated ( Error Opti,Sim ). The center of the simulated focus was chosen from the centroid of the volume using 3D Slicer’s ‘Segment Statistics’ module, thresholded at half of the maximum pressure to create a segmentation of the focus. Finally, we evaluated the simulated pressure field compared to our ground truth MR measurement ( Error Sim,ARFI ). For all error metrics, separate axial and lateral components were evaluated. The axial component was determined from the offset between the two foci along the transducer’s axis of propagation and the lateral component was the Euclidean norm of the X and Y directions. The focus locations were selected in 3D Slicer where volumes are automatically upsampled for visualization, thus the metrics are reported at higher precision than the simulation grid.
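The axial/lateral decomposition described above amounts to projecting the error vector onto the transducer's propagation axis and taking the norm of the remainder. A small sketch with hypothetical focus coordinates (mm):

```python
import numpy as np

def axial_lateral_error(p_est, p_ref, axis):
    """Split the error vector between two focus positions into a magnitude
    along the transducer's propagation axis and the lateral remainder."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    d = np.asarray(p_est, dtype=float) - np.asarray(p_ref, dtype=float)
    axial = float(np.dot(d, axis))                       # signed projection
    lateral = float(np.linalg.norm(d - axial * axis))    # norm of the rest
    return abs(axial), lateral, float(np.linalg.norm(d))

# Hypothetical foci: simulated vs. MR-ARFI, propagation along +z.
axial, lateral, total = axial_lateral_error([1.0, 2.0, 5.0],
                                            [0.0, 0.0, 2.0],
                                            [0, 0, 1])
```

When the propagation axis is aligned with z, the lateral component reduces to the Euclidean norm of the X and Y offsets, matching the definition used for the reported metrics.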
To achieve better agreement between experimental and simulated estimates of the focus, the distance vector between the predicted focus from optical tracking and the MR-ARFI focus was applied as a translation to the transducer model and re-simulated. The spatial error was quantified between the displacement image and the updated simulation results ( Error SimUpdated,ARFI ). | RESULTS
PHANTOM RESULTS
For all phantom data sets (N=8), the target registration error of the optical tracking system, or TRE Opti,ARFI , was 3.4 ± 1.0 mm. TRE Opti,ARFI and the separated axial and lateral error components are plotted in Figure 3a for each data set and the average for all data sets. The mean axial and lateral TRE Opti,ARFI was 1.5 ± 1.3 mm and 2.8 ± 1.3 mm, respectively. Similarly, Error Sim,ARFI is plotted in Figure 3b and describes the error between the simulated pressure field positioned from optical tracking data compared to the MR-ARFI focus location. Average Error Sim,ARFI was 3.7 ± 1.0 mm with 2.8 ± 1.2 mm of lateral error and 1.8 ± 1.4 mm of axial error. Figure 3c compares the error between the predicted and simulated foci, Error Opti,Sim , with a group average of 0.5 ± 0.1 mm, with errors of 0.4 ± 0.1 mm and 0.3 ± 0.0 mm in the axial and lateral directions. Errors of the updated simulations from the distance vector correction were less than a millimeter, reducing the original simulation error of Error Sim,ARFI from 3.7 ± 1.0 mm to 0.5 ± 0.1 mm for Error SimUpdated,ARFI . Slices from phantom dataset #1 centered about the MR-ARFI focus are shown in Figure 4 to demonstrate the spatial differences between each volume, where Figure 4a shows the targeting error, Figure 4b shows the simulation pipeline error, and Figure 4c shows the updated simulation results, which better spatially match the MR-ARFI focus in Figure 4a .
EX VIVO SKULL PHANTOM RESULTS
The simulation pipeline was further evaluated with transmission through an ex vivo skull phantom at three transducer orientations. Mean TRE Opti,ARFI was 3.9 ± 0.7 mm with 2.2 ± 0.4 mm and 3.2 ± 0.7 mm of lateral and axial error, respectively, as shown in Figure 5a . Figure 5b shows Error Sim,ARFI had larger axial error and total error than TRE Opti,ARFI . Average Error Sim,ARFI was 4.6 ± 0.2 mm, comprising 4.2 ± 0.2 mm of error axially and 1.8 ± 0.1 mm of error laterally. The averaged Error Opti,Sim in Figure 5c increased from 0.5 ± 0.1 mm in phantoms to 1.2 ± 0.4 mm when incorporating the skull, with individual axial and lateral components of 1.1 ± 0.5 mm and 0.5 ± 0.2 mm for the skull phantom, respectively. The distance vector correction improved the simulated focus location from an Error Sim,ARFI of 4.6 ± 0.2 mm to an Error SimUpdated,ARFI of 1.2 ± 0.4 mm. Similarly, axial and lateral errors of Error Sim,ARFI improved from 4.2 ± 0.2 mm and 1.8 ± 0.1 mm to Error SimUpdated,ARFI errors of 0.9 ± 0.8 mm and 0.4 ± 0.2 mm, respectively.
Figure 6a shows MR-ARFI displacement maps through an ex vivo skull cap from skull phantom data set #2. Enlarged views show the volume used to calculate TRE Opti,ARFI in Figure 6b . An axial shift is noted between the predicted focus and simulated focus in Figure 6c , and because slices are centered at the MR-ARFI focus location for comparison, we do not observe the center of the simulated focus location, as Error Sim,ARFI is 4.3 mm for this example case. However, the updated simulation results from vector correction reduced the error to 1.4 mm in Figure 6d .
The ground truth focus location from the MR-ARFI displacement map ([X,Y,Z] = [0,0,0]) is plotted against the simulated focus location before and after vector correction, where the improvement can be visualized in Figure 7 for both phantom and skull phantom data sets. The initially simulated foci show the error is not biased in a given direction compared to the MR-ARFI focus plotted in Figure 7a . The simulation results demonstrate there is not a uniform correction that would improve TRE Opti,ARFI or Error Sim,ARFI . For the vector-corrected cases in Figure 7b , the remaining error further demonstrates the offset attributed to either absorption in phantoms or the medium properties due to the skull.
Neuronavigation using optical tracking provides a noninvasive approach to position a transducer about a subject’s head during tFUS procedures independent of the MR scanner for guidance. For transcranial applications, the aberrating skull displaces the focus from the intended target but because of subject skull variability, this offset is difficult to predict in real-time. Acoustic simulations can predict attributes of the focus after interacting with the skull, but methods to position the transducer in the simulated space are not explicitly defined or achieved using ad hoc methods. Here, we proposed a method that uses transformations from optical tracking to position a transducer model in simulations representative of the transducer position during tFUS procedures. Metrics were chosen to quantify the spatial error of the simulated focus compared with the optically tracked or MR-measured focus. A correction method to update the transducer model was proposed to address inherent errors of using optical tracking to set up the pipeline. This simulation pipeline can be a tool to accompany optically tracked tFUS procedures outside of the MR environment and provide subject-specific, in situ estimates of the simulated pressure fields.
We first assessed the accuracy of our simulation workflow informed by optical tracking data for neuronavigated FUS experiments. A low error between the predicted focus from optical tracking and the simulated focus ( Error Opti,Sim ) was observed in both phantom (0.5 ± 0.1 mm) and skull phantom data sets (1.2 ± 0.4 mm). The larger Error Opti,Sim with the skull comprised 1.1 ± 0.5 mm of axial error, which can be attributed to skull-specific effects captured in simulation. Although we did notice minor focal shifts with the presence of the ex vivo skull, other simulation studies of ex vivo and in situ scenarios indicate larger focal shifts may be expected depending on the skull characteristics, brain target, and transducer properties [ 11 ], [ 22 ], [ 53 ]. This study largely focused on the spatial error of the simulation workflow, where we used previously validated acoustic parameters [ 48 ]. However, when considering this workflow for dosimetry estimates, importance should be placed on parameter selection, as a sensitivity analysis from Robertson et al. demonstrated the importance of accurately accounting for the speed of sound [ 54 ]. Relatedly, Webb et al. noted that the relationship between HU and the velocity of the human skull may vary with X-ray energy and reconstruction [ 55 ].
Although low Error Opti,Sim is promising for reproducing neuronavigated tFUS setups in silico , our accuracy assessment comparing the simulated pressure field to our MR measurement ( Error Sim,ARFI ) demonstrates that the simulation grids are only as accurate as the error of the optical tracking system ( TRE Opti,ARFI ). In our specific case, the lowest target registration error used to define our optical tracking system error was 3.5 ± 0.7 mm, while Error Sim,ARFI was either similar (3.7 ± 1.0 mm in phantoms) or worse (4.6 ± 0.2 mm in skull phantoms) than TRE Opti,ARFI . Improving the targeting accuracy of a system could include measures such as increasing the number of fiducials, optimizing fiducial placement, and reducing fiducial localization error [ 56 ], [ 57 ].
When translating the neuronavigation setup and simulation workflow used in this study for human subjects, there should be careful consideration regarding fiducial placement and transducer positioning. Fiducial placement should surround the target so that the centroid of the fiducial markers is as close to the target as possible [ 56 ], similar to the fiducial placement during the phantom scan experiments. However, spatially distributing fiducials around targets in the brain is not always feasible in practice. Previous neuronavigation experiments with human subjects identified skin blemishes or vein structures that can be reproducibly selected for multiple experiments [ 58 ] or placed fiducials near prominent features such as behind the ears and above the eyebrows [ 59 ]. Nonhuman primate experiments without MR guidance have used custom mounts to repeatably place fiducials on the subject’s head [ 13 ]. For nonhuman primates, Boroujeni et al. attached the fiducials to the headpost, which may translate to a helmet design similar to the experimental setup of Lee et al. [ 58 ] or the custom headgear of Kim et al. [ 59 ].
Because fiducial placement is limited for human subjects, we may expect a larger targeting error when translating the neuronavigation workflow for human studies. Additionally, the error may depend on the target in the brain. For example, it may be expected that deeper regions in the brain that are further from the centroid of the fiducial markers result in larger targeting errors [ 56 ]. While we do not know the exact angle of approach and target depths from the in silico study of Xu et al., it was observed that targets located in the frontal and temporal lobes had larger axial offsets compared to the occipital and parietal lobes, with cortical and deeper regions showing larger errors due to the skull interfaces [ 22 ]. For acoustic simulations, pseudo-CT images have been proposed as alternatives to acquiring CT images that expose human subjects to ionizing radiation [ 29 ], [ 30 ], [ 31 ], [ 32 ], [ 33 ], [ 34 ], [ 35 ], [ 60 ], [ 61 ]. We do not anticipate any technical challenges adapting pseudo-CT images into the simulation workflow proposed in this study.
Existing optical tracking systems can have errors greater than the expected focal size of the transducer, which can result in targeting undesired regions of the brain. Thus, we developed a correction method to reduce the targeting error and update the transducer model in simulation space so that the resulting pressure field spatially matches the MR-ARFI focus. For our work, we informed the vector correction using the center of the MR-ARFI focus, but this correction method can be explored with other imaging methods such as MR thermometry [ 62 ]. The updated simulation grids can be used for further analysis of tFUS procedures, as accurate placement of the simulation models will be useful when adapting the workflow for multi-element arrays to perform steered simulations. However, we note that the vector correction applied in this work assumes that the error can be corrected solely by a translation, where rotational correction was not explored in the scope of this work.
The simulation workflow presented here depends on an initial calibration at the MR scanner to create the transformation required to add the simulation coordinate space to the existing optical tracking transformation hierarchy. This poses a limitation for transducers that are not MR-compatible, which would require calibration and validation setups fully independent of the MR scanner. One proposed method could leverage previous methods from Chaplin et al. and Xu et al. to identify the transducer surface using an optically tracked hydrophone and assess the targeting accuracy in a water bath [ 20 ], [ 22 ]. Chaplin et al. described methods to produce an optically tracked beam map that could be expanded to localize the transducer surface using back projection [ 63 ] for simulations. The targeting accuracy of the optical tracking system could then be assessed using recent methods by Xu et al. using a water bath, hydrophone, and CT image, rather than MR localization methods. A fully MR-independent method becomes increasingly important when considering translation to human subjects, where some MR tools such as MR-ARFI still require further development before implementation in a clinical setting.

CONCLUSION
In our study, we described a workflow to integrate acoustic simulations with optically tracked tFUS setups. Simulations from our pipeline were validated with MR measurements and produced results comparable to the focus predicted by optical tracking. However, this pipeline is limited by the targeting accuracy of the optical tracking system. To improve estimates, we proposed a vector correction method informed by MR-ARFI to update the transducer model in the simulation grid, which improved the spatial representation of the ground-truth focus. This pipeline can be applied to existing tFUS neuronavigation setups using two open-source tools, aiding in the estimation of in situ characteristics of the ultrasound pressure field when MR guidance is unavailable.

ABSTRACT

Optical tracking is a real-time transducer positioning method for transcranial focused ultrasound (tFUS) procedures, but the predicted focus from optical tracking typically does not incorporate subject-specific skull information. Acoustic simulations can estimate the pressure field when propagating through the cranium but rely on accurately replicating the positioning of the transducer and skull in a simulated space. Here, we develop and characterize the accuracy of a workflow that creates simulation grids based on optical tracking information in a neuronavigated phantom with and without transmission through an ex vivo skull cap. The software pipeline could replicate the geometry of the tFUS procedure within the limits of the optical tracking system (transcranial target registration error (TRE): 3.9 ± 0.7 mm). The simulated focus and the free-field focus predicted by optical tracking had low Euclidean distance errors of 0.5 ± 0.1 and 1.2 ± 0.4 mm for the phantom and skull cap, respectively, and some skull-specific effects were captured by the simulation. However, the TRE of simulation informed by optical tracking was 4.6 ± 0.2 mm, which is as large as or greater than the focal spot size used by many tFUS systems.
By updating the position of the transducer using the original TRE offset, we reduced the simulated TRE to 1.1 ± 0.4 mm. Our study describes a software pipeline for treatment planning, evaluates its accuracy, and demonstrates an approach using MR-acoustic radiation force imaging as a method to improve dosimetry. Overall, our software pipeline helps estimate acoustic exposure, and our study highlights the need for image feedback to increase the accuracy of tFUS dosimetry.
ACKNOWLEDGMENT
The funders had no role in study design, data collection and analysis, the decision to publish, or the preparation of this manuscript. All acoustic simulations were run on a Quadro P6000 GPU donated by NVIDIA Corporation. Code, example data sets, and tutorials are available at: https://github.com/mksigona/OptitrackSimPipeline
This work was supported in part by the National Institute of Mental Health under Grant R01MH123687 and in part by the National Institute of Neurological Disorders and Stroke under Grant 1UF1NS107666.
MICHELLE K. SIGONA (Graduate Student Member, IEEE) received the B.S. degree in biomedical engineering from Arizona State University, Tempe, AZ, USA, in 2017. She is currently pursuing the Ph.D. degree in biomedical engineering with Vanderbilt University, Nashville, TN, USA. Her research interests include acoustic simulations for transcranial-focused ultrasound applications.
THOMAS J. MANUEL (Member, IEEE) received the B.S. degree in biomedical engineering from Mississippi State University, Starkville, MS, USA, in 2017, and the Ph.D. degree in biomedical engineering from Vanderbilt University, Nashville, TN, USA, in 2023. He is starting a post-doctoral fellowship with Institut National de la Santé et de la Recherche Médicale, Paris, France, in the laboratory of Jean-Francois Aubry in March 2023. His current research interests include transcranial-focused ultrasound for blood-brain barrier opening and neuromodulation.
M. ANTHONY PHIPPS (Member, IEEE) received the B.S. degree in biophysics from Duke University, Durham, NC, USA, in 2012, the M.S. degree in biomedical sciences from East Carolina University, Greenville, NC, USA, in 2015, and the Ph.D. degree in chemical and physical biology from Vanderbilt University, Nashville, TN, USA, in 2021. He is currently a Post-Doctoral Fellow with the Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center. His research interests include image-guided transcranial focused ultrasound with applications including neuromodulation and blood-brain barrier opening.
KIANOUSH BANAIE BOROUJENI received the B.S. degree in electrical engineering from the University of Tehran, Tehran, Iran, in 2016, and the Ph.D. degree in neuroscience from Vanderbilt University, Nashville, TN, USA, in 2021. After completing a post-doctoral year at Vanderbilt University, he began post-doctoral research at Princeton University, Princeton, NJ, USA, in 2023. He is working on neural information routing between brain areas during flexible behaviors.
ROBERT LOUIE TREUTING received the B.S. degree in applied mathematics from Spring Hill College, Mobile, AL, USA, in 2016. He is currently pursuing the Ph.D. degree in biomedical engineering with Vanderbilt University, Nashville, TN, USA. His research interests include cognitive flexibility and closed-loop stimulation.
THILO WOMELSDORF is currently a Professor with the Department of Psychology and the Department of Biomedical Engineering, Vanderbilt University, where he leads the Attention Circuits Control Laboratory. His research investigates how neural circuits learn and control attentional allocation in non-human primates and humans. Before arriving at Vanderbilt University, he led the Systems Neuroscience Laboratory at York University, Toronto, receiving the prestigious E. W. R. Steacie Memorial Fellowship in 2017 for his work bridging the cell and network levels of understanding how brain activity dynamics relate to behavior.
CHARLES F. CASKEY (Member, IEEE) received the Ph.D. degree from the University of California at Davis in 2008, for studies on the bioeffects of ultrasound during microbubble-enhanced drug delivery under Dr. Katherine Ferrara. He has been working in the field of ultrasound since 2004. Since 2013, he has been leading the Laboratory for Acoustic Therapy and Imaging, Vanderbilt University Institute of Imaging Science, which focuses on developing new uses for ultrasound, including neuromodulation, drug delivery, and biological effects of sound on cells. In 2018, he received the Fred Lizzi Early Career Award from the International Society of Therapeutic Ultrasound.

IEEE Open J Ultrason Ferroelectr Freq Control. 2023 Sep 25; 3:146-156 (CC BY)
PMC10786321 (PMID 38222444)

INTRODUCTION
Midportion Achilles tendinopathy (AT) is an overuse injury of insidious onset resulting in pain and stiffness in the midportion of the Achilles tendon. Although commonly attributed to deficient plantar flexor muscle function ( 1 , 2 ), there is limited empirical evidence that supports this assumption, particularly compared with healthy controls. Instead, the premise of weak or fatigable plantar flexors in AT is largely based on two factors: 1) onset commonly accompanies a sudden increase in activity, implying that insufficient strength was a predisposing factor ( 2 , 3 ), and 2) strength gains often—but not always ( 4 )—accompany improvements in symptoms and function ( 5 ).
According to a recent systematic review ( 6 ), the state of plantar flexor muscle function in AT is inconclusive; of six studies comparing AT with controls, three reported no difference ( 7 – 9 ), one reported impaired muscle function in AT ( 10 ), and two had mixed results within their respective cohorts ( 11 , 12 ). No study simultaneously evaluated isometric, concentric, and eccentric strength or the contractility of or ability to activate the plantar flexor muscles, limiting the ability to draw conclusions regarding maximal strength, mode of contraction, and contributing mechanisms. Despite existing data, the presumption of weakness in AT remains, as do recommendations for strengthening. However, strengthening interventions do not consistently resolve symptoms and functional limitations in AT ( 2 , 6 ). If strength is deficient, strengthening interventions would be expected to resolve the condition more reliably and consistently.
Weakness associated with impaired function, which might occur in AT, could be due to inadequate neural drive (i.e., the ability of the central nervous system to activate a muscle), muscle atrophy, or inability of the available muscle to contract. Neural drive and muscular contractility can be estimated using electrical stimulation ( 13 ). Voluntary activation (VA), a measure of neural drive, evaluates the ratio of torque generated during a maximal effort contraction to the torque generated when a muscle, motor nerve, or motor cortex is stimulated. The torque produced in response to electrical stimulation at rest, known as a resting twitch (RT), provides information about the force-generating capacity of the muscle or contractility, independent of nervous system activation. Understanding contributions of neural drive and muscular contractility to neuromuscular function in AT may help explain why symptoms are often unresolved with strengthening interventions alone.
Pain is another factor that can limit optimal muscle function. Activation of group III and IV muscle afferent nerves, for example, can result in inhibition of motoneurons and reduce VA ( 14 ). Such somatosensory feedback is likely imperative to protect exercising muscle ( 15 ). However, in chronic pain populations, such inhibition could result in muscle atrophy and persistent abnormalities in neuromuscular function ( 16 – 18 ).
The purpose of this study was to determine the following in persons with AT: 1) maximal plantar flexion strength and power during isometric and dynamic contractions; 2) neural drive during maximal effort contractions and contractile function during electrically evoked contractions at rest; and 3) the contributions of pain, neural drive, and contractile mechanisms to maximal strength. Based on prevailing assumptions and possible interactions with pain, we hypothesized that people with AT would be weaker than controls because of inadequate neural drive.

METHODS
Participants
Twenty-eight volunteers participated in the study: 14 with AT and 14 controls ( Table 1 ). AT inclusion criteria were history of gradual, insidious onset of pain and/or stiffness at the midportion of the Achilles tendon, which had become chronic (i.e., persisted for at least 3 months); pain reproduction with palpation of the midportion of the tendon; a positive Arc Sign; and a positive Royal London Hospital Test ( 2 ). Clinical assessments were completed as previously described ( 19 ). Controls were without history of either pain or stiffness in the Achilles tendon region. Exclusion criteria included diabetes ( 20 ); thyroid disorders ( 21 ); cardiovascular disease; neurological disease; known contraindications to exercise; and any acute injury, bursitis, insertional tendinopathy, or osteoarthritis in either lower extremity. Informed consent was obtained from all participants before participation in the study. The study protocol was approved by the Marquette University Institutional Review Board (HR-1801021327) and in compliance with the Declaration of Helsinki.
The study involved measures of maximal strength, pressure-pain thresholds (PPTs), VA, and RT of the plantar flexor muscles while seated in a Biodex™ dynamometer (Biodex System 3 Pro; Biodex Medical, Shirley, NY, USA). The following muscles have a moment arm and fiber direction that enables contribution to ankle plantar flexion: medial and lateral gastrocnemii (MG and LG, respectively), soleus (SOL), fibularis longus and brevis, posterior tibialis, flexor digitorum longus, and flexor hallucis longus. However, it is not possible to parcel out torque contributions of individual agonist muscles from the total plantar flexion torque measured during dynamometry. Thus, this study refers to “plantar flexors” rather than attributing maximal strength to any individual muscle or muscles. All testing sessions were led by the same researcher (L.K.S.). Testing took place during two sessions, and all tests were completed for both lower extremities. The order of limbs was randomized every session. Within sessions, testing was completed on one leg before performing identical testing of the contralateral limb.
The first session involved 1) familiarization to plantar flexor strength testing in the dynamometer, electrical stimulation of the tibial nerve (for VA and RT), and PPTs using a Somedic algometer (Somedic SenseLab AB, Sösdala, Sweden); 2) clinical measurements; and 3) completion of self-report outcome measures, including the Tampa Scale of Kinesiophobia ( 22 ), the Foot and Ankle Ability Measure (FAAM) ( 23 ), and the Victoria Institute of Sport Assessment—Achilles Questionnaire (VISA-A) ( 24 ), which evaluate fear of pain or injury, symptoms or difficulty with several functional tasks and activities of daily living, and the effect of AT on participation in activities. The second session involved evaluation of isometric MVCs with electrical stimulation, concentric and eccentric MVCs, baseline PPTs, and body composition measurements (using dual x-ray absorptiometry). Muscle activity of the triceps surae and tibialis anterior (TA) muscles was measured using surface electromyography (EMG). Physical activity data were calculated based on responses to the 12-month, self-report Modifiable Activity Questionnaire ( 25 ).
Experimental Setup
Each participant was seated in the dynamometer with a straight knee (0° flexion) and a trunk angle of 55° (to minimize hamstring discomfort and sciatic nerve tension during testing). The thigh rested on a padded thigh support, and the foot rested on a foot plate affixed to the dynamometer ( Fig. 1 ). To minimize extraneous movement and isolate exercise to the ankle joint, straps were placed across the waist, chest, and thigh. Two straps were secured around the ankle and one around the forefoot to maintain the plantar surface of the foot in contact with the foot plate. For PPT measurements, the ankle straps were removed to access the Achilles tendon, and the ankle was placed in a neutral position (i.e., foot perpendicular to shank).
To assess muscle activity in response to electrical stimulation, EMG electrodes were placed on the SOL, MG, LG, and TA muscles of bilateral lower extremities in a bipolar configuration (Ag–AgCl, 8-mm diameter; 20-mm interelectrode distance; Natus Medical Inc., Middleton, WI, USA) in accordance with recommendations from the Surface Electromyography for the Non-Invasive Assessment of Muscles project ( 26 ). Data were amplified (4000 Hz; Coulbourn Instruments, Allentown, PA, USA), digitized, and stored online (Power1401, Spike2 software; Cambridge Electronic Design Limited, Cambridge, UK).
Pressure-Pain Thresholds
PPTs were measured with the participant seated in the dynamometer using a 1-cm 2 algometer probe tip aligned perpendicular to the tissue being tested and at an application rate of 30 kPa·s −1 ( figure, Supplemental Content 1 , http://links.lww.com/EM9/A15 ). Algometry is a reliable method for evaluating PPTs within and between sessions ( 27 , 28 ). PPT was defined as the minimum pressure required to induce pain ( 28 ). Evaluation of PPTs local and remote to injured tissue provides valuable information about peripheral and central mechanisms, respectively, and is recommended for inclusion in tendinopathy populations ( 29 , 30 ).
PPT familiarization included emphasis that the measurement was not a test of pain tolerance but rather of the point at which the pressure sensation was first perceived as painful. Measurements were completed at three sites: 1) the Achilles tendon, using the pinch handle 4 cm proximal to the calcaneal insertion; 2) the ipsilateral MG, midway between lateral and medial margins at the location of largest calf girth; and 3) the upper trapezius, along its superior margin halfway between the seventh cervical vertebrae and the acromion process (see “x” marks in the figure, Supplemental Content 1 , http://links.lww.com/EM9/A15 ). Participants pressed a button when the pressure sensation was first perceived as painful. Two measurements were completed at each measurement site, then averaged in subsequent analyses. Measurement order was randomized for each leg, participant, and session.
Isometric Strength, VA, and Contractile Properties
Isometric strength was assessed using isometric maximal voluntary contractions (MVCs), VA was assessed with the interpolated twitch technique, and RT was evaluated in response to electrical stimulation of the resting muscle in a potentiated state. Torques produced during doublet stimulation were used in analyses of VA and RT.
To ensure supramaximal stimulation, isometric MVCs with electrical stimulation were assessed after determining optimal stimulation levels ( 31 ). In brief, the ipsilateral tibial nerve was stimulated with a bar electrode and a constant-current, variable high-voltage stimulator (DS7AH; Digitimer Ltd, Hertfordshire, UK). Stimulation was applied at the medial popliteal space distal to the sciatic nerve bifurcation. Single square-wave pulses (400 V, 100 μs duration) were delivered with a stimulation intensity initiated at 50 mA and gradually increased until torque and EMG responses were optimized. The intensity was increased an additional 10% to ensure supramaximal stimulation during RT and VA.
To determine VA, the triceps surae muscles were electrically stimulated during an isometric MVC and then immediately after the MVC (~2 s after), during the muscles’ potentiated state. VA was calculated as the ratio of torque produced in response to doublet stimulation during isometric MVC (known as a superimposed twitch; SIT) to the torque produced in response to doublet stimulation at rest, while in its potentiated state (RT) ( 13 ): VA = 100 × (1 − SIT/RT). Immediately after resting doublet stimulation, a single-pulse stimulation was completed to evaluate compound muscle action potentials. Doublet stimulations were used for calculating VA, because they have been shown to be more sensitive than single stimulations ( 32 ). Participants completed four isometric MVCs, each separated by 2 min of rest.
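The VA calculation above can be expressed directly as a small function (a sketch with hypothetical torque values; units follow the dynamometer output):

```python
def voluntary_activation(sit_torque, rt_torque):
    """Voluntary activation (%) via the interpolated twitch technique:
    VA = 100 * (1 - SIT/RT), where SIT is the superimposed doublet torque
    during the MVC and RT is the potentiated resting doublet torque."""
    return 100.0 * (1.0 - sit_torque / rt_torque)

# Hypothetical example: a 2 N·m superimposed twitch on a 20 N·m potentiated
# resting twitch implies 90% voluntary activation.
va = voluntary_activation(sit_torque=2.0, rt_torque=20.0)
```

A smaller superimposed twitch relative to the resting twitch indicates more complete neural drive, with VA = 100% when stimulation adds no torque at all.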
The following contractile properties from the singlet stimulation were evaluated: rates of torque development and relaxation; maximum peak-to-peak compound muscle action potentials (Mmax; a measure of muscle activity in response to electrical stimulation) as measured using EMG of the triceps surae muscles; and electromechanical delay, defined as the time between onset of EMG activity and plantar flexion torque ( 33 ).
Dynamic Strength
Maximal dynamic strength was assessed using concentric and eccentric MVCs at eight velocities, in increments of 30 deg·s−1 from −90 to 150 deg·s−1 (negative velocities indicate eccentric activation). Velocities faster than 150 and −90 deg·s−1 were not utilized; during pilot testing, most healthy participants could not reach target velocities greater than 150 deg·s−1 within their available ankle range of motion, and eccentric torques plateaued by −90 deg·s−1.
Concentric and eccentric MVCs were completed using each participant’s available range of motion. Familiarization included four repetitions of both concentric and eccentric MVCs at 60 deg·s−1. Test contractions included four repetitions of maximal-effort contractions at each of the eight velocities, whose order was randomized per limb and participant. Participants rested 1 min between each set of MVCs. Strong verbal encouragement and visual feedback were provided for every MVC in the study, including familiarization.
Data and Statistical Analyses
The best of the four torques at each testing velocity was used for analyses. Isometric MVC was measured as the average of a 0.5-s window surrounding the maximum torque but preceding the superimposed electrical stimulation. Root-mean-squared EMG was recorded during the same 0.5-s window. Concentric and eccentric MVCs were measured as the instantaneous peak torque during dynamic contractions. Peak power was calculated as the product of instantaneous peak torque and velocity at time of peak torque. Rates of torque development and relaxation were normalized to isometric MVC torque. Mmax was normalized to the root-mean-squared EMG recorded during the 0.5-s window corresponding to peak isometric MVC torque.
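These reductions can be sketched as follows (function and variable names are ours, not the authors' analysis code; the windowing mirrors the description above, and power is converted to SI watts assuming velocity is recorded in deg·s−1):

```python
import numpy as np

def isometric_mvc_torque(torque, fs, stim_index, window_s=0.5):
    """Mean torque over a 0.5-s window surrounding the maximum torque,
    restricted to samples preceding the superimposed stimulation.
    torque: sampled torque trace (N·m); fs: sampling rate (Hz)."""
    pre = np.asarray(torque, dtype=float)[:stim_index]
    half = int(window_s * fs / 2)
    peak = int(np.argmax(pre))
    lo, hi = max(0, peak - half), min(len(pre), peak + half)
    return float(pre[lo:hi].mean())

def peak_power(torque, velocity_deg_s):
    """Instantaneous peak power (W): peak torque (N·m) times the angular
    velocity at the time of peak torque, converted from deg/s to rad/s."""
    torque = np.asarray(torque, dtype=float)
    velocity = np.deg2rad(np.asarray(velocity_deg_s, dtype=float))
    i = int(np.argmax(np.abs(torque)))
    return float(torque[i] * velocity[i])
```

At a constant dynamometer velocity the velocity term is fixed, so peak power tracks peak torque scaled by the preset angular speed.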
Independent samples t -tests were used to compare physical characteristics, physical activity levels, and kinesiophobia between groups. Multivariate analyses of covariance (ANCOVAs) were used to evaluate between-group effects of AT diagnosis on strength, VA, and RT, with covariates of biological sex and age. Mediation analyses were performed to evaluate the effects of AT on PPT and of PPT on strength, VA, and RT. Repeated-measures ANCOVAs were used to compare limbs in persons with AT using a similar approach: outcome variables of strength, VA, and RT, and mediation analysis using PPTs.
Regression analyses were completed to assess the contributions of RT, VA, and pain to isometric MVCs. Normality was assumed for all variables based on histograms and Q-Q plots. A priori significance was set to P < 0.05. Data are reported as mean ± standard deviation (SD) in the text and table and displayed as mean ± standard error of the mean (SEM) in the figures. Analyses were performed using the Statistical Package for Social Sciences (SPSS, V26; IBM Corp., Armonk, NY, USA).

RESULTS
Of the AT participants, half had bilateral AT. Thus, for within-group analyses, limbs were divided into more and less affected limbs and referred to, for simplicity, as “AT” and “AT–Control.” AT symptom duration ranged from 3 to 240 months. Limbs were matched for dominance when comparing AT and controls. Despite efforts to match based on age and sex, the AT group was older and had a greater proportion of male participants than the control group ( Table 1 ). Consequently, age and sex were used as covariates in statistical analyses. Torque data were normalized to body weight.
Based on FAAM and VISA-A, pain and disability were greater in AT than control or AT–Control limbs ( Table 1 ). Kinesiophobia did not differ between groups. Maximal AT pain occurred 2.9–5.4 cm proximal to the calcaneal notch.
Isometric MVC plantar flexion torque was not different between AT and controls when controlling for biological sex and age ( P = 0.89; Table 1 ), including when normalized to body weight ( P = 0.21; figure, Supplemental Content 2 , http://links.lww.com/EM9/A16 ). Likewise, isometric torque was not different between AT and AT–Control limbs ( P = 0.19).
Concentric and eccentric MVC torque and power were not different between groups at any velocity when controlling for age and sex, whether analyzed collectively across velocities (torque: P = 0.99; power: P =0.98) or at individual velocities ( Fig. 2 ). There was a main effect of velocity for torque ( P = 0.001) and power ( P = 0.04). When comparing AT and AT–Control limbs, there were no differences in concentric and eccentric MVC torque ( P = 0.88) or power ( P = 0.40). When analyzed at individual velocities, the findings were similar: maximal dynamic (concentric and eccentric) plantar flexor strength was not different between AT limbs.
When controlling for age and sex, RT—a measure of contractile function—was similar in AT and controls ( P = 0.07; Table 1 ). RT was not different between AT limbs ( P = 0.37). There were no differences in VA between groups, controlling for age and sex ( P = 0.53), or between limbs ( P = 0.30; Table 1 ).
Consistent with doublet RT, there were no between-group differences in singlet RT ( P = 0.41), rates of torque development ( P = 0.92) or relaxation ( P = 0.94), contraction time ( P = 0.12), or half-relaxation time ( P = 0.99). There were no AT between-limb differences in these variables (rate of torque development: P = 0.23; rate of torque relaxation: P = 0.22; contraction time: P = 0.86; half-relaxation time: P = 0.99; figure, Supplemental Content 3 , http://links.lww.com/EM9/A17 ). Mmax was not different between groups (LG: P = 0.70, MG: P = 0.60; SOL: P = 0.95; TA: P = 0.51) or between limbs for any muscle (LG: P = 0.83, MG: P = 0.67; SOL: P = 0.64; TA: P = 0.79). There were no differences in electromechanical delay between groups (AT: 9.9 (2.7) ms, control: 10.9 (3.0) ms; P = 0.38) or between limbs (AT–Control: 9.5 (4.7) ms, P = 0.76). Current amplitudes did not differ between groups (AT: 310.6 (152.2) mA, control: 272.6 (99.1) mA; P = 0.44).
Upper trapezius PPTs were higher in AT than controls (AT: 327.86 (184.60) kPa, control: 189.82 (73.31) kPa; P = 0.02; Fig. 3 ). This difference remained significant when adjusting for age ( P = 0.02), sex ( P = 0.03), or physical activity ( P = 0.05). However, it was not significant when adjusting for body weight alone ( P = 0.20) or for the combined variables of physical activity, age, sex, and body weight ( P = 0.39). There were no differences in calf (AT: 258.86 (156.51) kPa, control: 197.14 (109.53) kPa; P = 0.24) or Achilles tendon PPTs (AT: 268.18 (136.78) kPa, control: 229.86 (97.26) kPa; P = 0.40; Fig. 3 ), even when controlling for sex, age, body weight, and physical activity (calf, P = 0.93; Achilles tendon, P = 0.65). AT and AT–Control limbs were not different at the upper trapezius (AT–Control: 334.25 (181.44), P = 0.93), calf (AT–Control: 285.21 (158.22), P = 0.66), or Achilles tendon (AT–Control: 300.82 (142.74), P = 0.54).
Predictors for isometric MVC strength included in the regression analysis were RT, VA, and upper trapezius PPT. Because Achilles tendon and calf PPTs did not differ between groups, only upper trapezius PPTs were included in this analysis. In controls, the set of predictors significantly contributed to isometric MVC ( F (3, 10) = 3.93, P = 0.04). However, when controlling for other predictors, VA was the only predictor of isometric MVC in controls ( t (10) = 2.70, P = 0.02), and the proportion of variance uniquely explained by VA was 0.33. The three predictors also significantly contributed to isometric MVC in AT ( F (3, 10) = 9.11, P = 0.003). However, when controlling for other predictors, only RT significantly predicted isometric MVC in AT ( t (10) = 4.65, P = 0.001; uniquely explained variance = 0.58). In AT–Control, the predictors significantly contributed to isometric MVC ( F (3, 10) = 7.29, P = 0.007). When controlling for the other predictors, both VA ( t (10) = 2.29, P = 0.045; uniquely explained variance = 0.28) and RT ( t (10) = 2.98, P = 0.01; uniquely explained variance = 0.28) were significant predictors of MVC torque in AT–Controls.

DISCUSSION
This study is unique in that it evaluates neural and muscular contributions to maximal strength in AT. Despite similar plantar flexor strength, the contributions differed between AT and controls. In AT, maximal isometric torque was associated with RT, indicating contractile function predicted strength. In controls, VA (neural drive) was the largest predictor of maximal isometric torque. Systemic pain perception also differed: upper trapezius PPTs were elevated in AT.
There were no differences in maximal plantar flexion strength between AT and controls or between AT limbs. Although strength differences are commonly assumed in tendinopathy ( 1 ), a recent systematic review found conflicting evidence for impaired plantar flexor muscle strength in people with AT compared with controls ( 6 ). We showed that plantar flexor strength and power were not impaired at any velocity of contraction in people with AT.
RT is largely representative of contractile (muscular) mechanisms. However, the impact of tendon properties on electrically evoked torque amplitudes and differences in tendon properties between controls and those with AT may explain why maximal plantar flexor torque was best predicted by RT in AT ( 33 ). Furthermore, electrical stimulation intensities do not explain why neuromuscular mechanisms (i.e., VA and RT) contributed differently to maximal isometric strength.
That electromechanical delay was also not different between groups simply suggests that the onset of torque production is similar. The ultimate difference in RT could be the result of a reduced modulus of elasticity: with a smaller slope in the stress–strain relationship, a larger amount of tissue deformation would be required to produce similar levels of stress. These findings suggest that musculotendinous slowing may be responsible for differences in plantar flexor function in AT.
VA was the best predictor of isometric MVC torque in controls, echoing previous research evaluating plantar flexor function in young, healthy adults ( 31 ). However, VA had a lesser influence on isometric MVC torque with greater symptom severity: the proportional variance in isometric MVC uniquely explained by VA decreased from 0.33 in controls to 0.28 in AT–Controls to 0.01 in AT. In contrast, RT was the best predictor of isometric MVC torque in the more affected AT limb. RT played an increasingly prominent role in predicting isometric MVC as symptom severity increased: the proportional variance in isometric MVC that was uniquely explained by RT increased from 0.02 in controls to 0.28 in AT–Controls to 0.58 in AT.
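The proportion of variance uniquely explained by a predictor (the quantity tracked above) can be computed as the drop in R² when that predictor is removed from the full model, i.e., the squared semipartial correlation. The numpy sketch below illustrates the quantity on synthetic data; it is not the authors' SPSS procedure:

```python
import numpy as np

def r_squared(X, y):
    """In-sample R² of an ordinary least squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / np.asarray(y, dtype=float).var()

def unique_variance(X, y, j):
    """Proportion of variance in y uniquely explained by predictor column j:
    the drop in R² when column j is removed from the full model (the
    squared semipartial correlation)."""
    return r_squared(X, y) - r_squared(np.delete(X, j, axis=1), y)
```

With simulated data where only the first predictor drives the outcome, `unique_variance` is large for that column and near zero for an irrelevant one, mirroring how VA's unique contribution shrank while RT's grew with symptom severity.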
Just as differences in RT are likely not explained by plantar flexor contractility, contractile mechanisms probably do not underlie the relationships between RT and MVC. If reductions in RT are indeed explained by impairments in tendon properties in AT, the impaired tendon properties could lessen the role of neural drive in predicting maximal strength and instead become the predominant predictor of maximal strength in AT.
Persons with AT were less sensitive to pain than controls in a region of the body not involved in the AT injury (i.e., the upper trapezius). These findings suggest a difference in systemic pain modulation. However, greater upper trapezius PPTs could reflect the greater body weight and greater proportion of male participants in the AT group rather than meaningful differences in PPTs between AT and controls.
Previous research on pain sensitivity in AT is mixed. People with AT have been shown to have reduced PPTs (higher sensitivity), similar PPTs, and increased PPTs (lower sensitivity) when compared with controls ( 34 – 37 ). None of these studies limited their test group to midportion AT; they included insertional AT ( 34 – 37 ), persons with lateral ankle pain and heel pain ( 35 ), and persons with and without a history of tendon pain ( 37 ). The heterogeneity of findings suggests that AT may be too multifactorial to simplify into central versus peripheral processes.
The pain findings from PPTs, self-report measures, and clinical measures may seem conflicting at first glance; there were no differences in local PPTs (measured at the Achilles tendon), but there were group differences in self-reported pain and stiffness per VISA-A and FAAM, and palpation of the tendon resulted in pain reproduction in patients with AT. However, these outcome measures do not include "pain with palpation" among their items. Instead, they include pain and stiffness in response to tendon loading activities, such as standing, walking, stair climbing, and participation in sport ( 23 , 24 ). Although pain is often reproducible with palpation of the tendon in AT, pressure applied to the tendon is not related to the mechanism of injury (the reason patients seek intervention) or the typical aggravating factors cited by patients with AT ( 3 ), and does not consistently correlate with symptoms ( 27 ). In addition, most control participants in this study reported pain during tendon palpation. This was unsurprising because even healthy tendons can be sensitive to pressure ( 3 ), yet this pain was neither a reproduction of previously experienced pain, nor was it accompanied by other positive findings suggestive of AT. Finally, PPTs were measured at a standardized site, rather than at the point of maximal tenderness to palpation, due to setup constraints from the Biodex footplate. Although these locations were similar (PPTs were measured at 4 cm, and average maximal pain was between 2.9 and 5.4 cm proximal), any variations from the point of maximal tenderness could have contributed to lesser sensitivity at the Achilles tendon than has been found in other studies.
Translational Significance
Considering the magnitudes of muscle force observed in this study, resisted plantar flexion exercises using resistance bands are likely insufficient for engaging the Achilles tendon in AT. Considering an average internal moment arm of 5.2 cm at the talocrural joint, and an average MVC torque of 130.5 N·m, the average plantar flexor muscle force during isometric MVCs was approximately 2510 N. In contrast, the maximum forces produced by the stiffest therapeutic resistance bands at a 250% stretch range from 50 to 80 N ( 38 ). Even the stiffest resistance bands (80 N force) at this high level of prestretch would provide, at most, a force equivalent to 3.2% of isometric MVC force.
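The arithmetic behind these figures is a simple torque-to-force conversion; the sketch below reproduces it using the values stated in the paragraph above:

```python
# Plantar flexor muscle force is joint torque divided by the internal
# moment arm of the Achilles tendon at the talocrural joint.
moment_arm_m = 0.052        # average internal moment arm (5.2 cm)
mvc_torque_nm = 130.5       # average isometric MVC torque (N*m)

muscle_force_n = mvc_torque_nm / moment_arm_m
print(round(muscle_force_n))            # ~2510 N

# Even the stiffest therapeutic band (~80 N at 250% stretch) supplies
# only a small fraction of that force.
band_force_n = 80.0
print(round(100 * band_force_n / muscle_force_n, 1))  # ~3.2 (% of MVC force)
```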
Rather than traditional muscle strengthening, the results of this study support approaching AT rehabilitation via a tendon loading paradigm. Exercise progression and dosage for muscle strengthening are well defined and relate to adaptations that occur within the muscle and nervous system ( 39 , 40 ). In contrast, parameters needed for an effective tendon-loading protocol in AT are largely undefined. Future research should focus on addressing this gap.
Limitations
Neural and muscular mechanisms were investigated during isometric contractions only, because the interpolated twitch technique is not well validated during dynamic contractions ( 41 ). Neural drive and muscular mechanisms may not behave identically during dynamic and isometric contractions. Finally, this study cannot attribute cause and effect to the differences in systemic pain modulation (elevated upper trapezius PPT) seen in persons with AT. Further research using alternative study designs would be required to establish causation.
Conclusions
Plantar flexor strength and power differences cannot be assumed in AT, whether evaluated isometrically, concentrically, or eccentrically. Pain sensitivity quantified using algometry also cannot be assumed in AT. Despite similar magnitudes of RT and VA, their relationships to MVC differed between AT and controls. These findings suggest that prognosis in AT may not depend upon making gains in plantar flexor strength or power, a finding echoed by a recent systematic review ( 6 ). Instead, symptom resolution may be contingent upon normalizing function of the musculotendinous unit.
Introduction/Purpose:
The purpose of this study was to determine the following in persons with midportion Achilles tendinopathy (AT): 1) maximal strength and power; 2) neural drive during maximal contractions and contractile function during electrically evoked resting contractions; and 3) whether pain, neural drive, and contractile mechanisms contribute to differences in maximal strength.
Methods:
Twenty-eight volunteers (14 AT, 14 controls) completed isometric, concentric, and eccentric maximal voluntary contractions (MVCs) of the plantar flexors in a Biodex™ dynamometer. Supramaximal electrical stimulation of the tibial nerve was performed to quantify neural drive and contractile properties of the plantar flexors. Pain sensitivity was quantified as the pressure-pain thresholds of the Achilles tendon, medial gastrocnemius, and upper trapezius.
Results:
There were no differences in plantar flexion strength or power between AT and controls (isometric MVC: P = 0.95; dynamic MVC: P = 0.99; power: P = 0.98), nor were there differences in neural drive and contractile function ( P = 0.55 and P = 0.06, respectively). However, the mechanisms predicting maximal strength differed between groups: neural drive predicted maximal strength in controls ( P = 0.02) and contractile function predicted maximal strength in AT ( P = 0.001). Although pain did not mediate these relationships (i.e., between maximal strength and its contributing mechanisms), pressure-pain thresholds at the upper trapezius were higher in AT ( P = 0.02), despite being similar at the calf ( P = 0.24) and Achilles tendon ( P = 0.40).
Conclusions:
There were no deficits in plantar flexion strength or power in persons with AT, whether evaluated isometrically, concentrically, or eccentrically. However, the mechanisms predicting maximal plantar flexor strength differed between groups, and systemic pain sensitivity was diminished in AT.
We are incredibly grateful to the following people who contributed to data collection and processing for the study: Karis, Meggie, Monica, Michael, and Elizabeth.
These contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health, the Medical College of Wisconsin, or Marquette University. The results of the current study do not constitute endorsement by the American College of Sports Medicine. The results of the study are presented clearly, honestly, and without fabrication, falsification, or inappropriate data manipulation.
DATA AVAILABILITY
The data sets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request. | CC BY | no | 2024-01-16 23:35:04 | Exerc Sport Mov. 2023 Oct 26 Fall; 1(4):1-7 | oa_package/f3/61/PMC10786321.tar.gz |
PMC10786409 | 38222460 | Introduction
Community-Based Participatory Research (CBPR) is an effective approach for addressing health disparities by bridging the gap between research and action ( N. B. Wallerstein & Duran, 2006 ). In CBPR, knowledge is collaboratively produced and owned by a diverse group of stakeholders, including local communities ( Israel et al., 1998 ). Throughout the research process, non-academic stakeholders are empowered and engaged, promoting shared decision-making and co-learning among all involved ( Paradiso de Sayu & Chanmugam, 2016 ; Ross et al., 2010 ). By involving community members and other stakeholders in the research process, CBPR helps to ensure that research findings are relevant, applicable, and meaningful to the communities, and that the research leads to positive changes that benefit local communities ( Skizim et al., 2017 ; N. Wallerstein & Duran, 2010 ).
Research partnerships in CBPR are not about transforming community partners into academic researchers, or vice versa. Instead, successful CBPR projects require negotiation and compromise through dialogue and trust-based relationships. Building a strong and trusting partnership between academic researchers and members of the community is crucial, with clarity around each party’s role and how they could benefit from the relationship ( Andrews et al., 2013 ). Underserved communities have unique knowledge and connections to offer, including first-hand experience with health and social issues affecting their communities and the history of actions and solutions adopted to address them. While university researchers may have methodological expertise, analytical skills, and access to research funding, their limited understanding of insider perspectives on community problems can be a challenge. In some cases, researchers lack the first-hand/lived experience needed to truly understand challenges in the communities they are expected to serve. Therefore, creating a collaborative partnership between academic researchers and community members can bring valuable insights that can lead to more relevant and impactful research ( Martinez et al., 2013 ).
Despite the increasing demand, the potential of CBPR to address health disparities remains unrealized, as the traditional university-controlled approach to research remains the norm ( Coombe et al., 2023 ). CBPR projects take longer than traditional studies. The effectiveness and impact of CBPR projects is greater when academic and community partners spend ample time learning from one another, developing agreeable plans, and nurturing productive and trusting partnerships ( Jagosh et al., 2015 ; Ross et al., 2010 ; N. Wallerstein & Duran, 2010 ). Building a strong research partnership is challenging, and various roadblocks can hinder progress. Sharing power, nurturing a co-learning environment, and developing capacity are among the most significant challenges ( Andress et al., 2020 ; Coombe et al., 2020 ; Henry Akintobi et al., 2021 ; Israel et al., 1998 ; Muhammad et al., 2015 ). Developing trusting and inclusive CBPR partnerships, particularly among people who have not worked together previously, requires time and co-learning (e.g., reciprocal exchange of knowledge and skills) ( Coombe, C. M., 2023 ). A range of issues, including low self-confidence, fear, hesitation to participate, and mistrust in research (often rooted in historical events and traumatic experiences), are common challenges related to fostering strong research partnerships between community and academic researchers ( Coombe et al., 2023 ). On the other hand, university researchers’ lack of understanding of the insider perspective on problems may result in research questions that do not align with community priorities, suboptimal utilization of local existing resources and assets, and low levels of participation among populations experiencing health disparities ( Sheikhattari et al., 2012 ). 
Facilitating communication and teamwork among a diverse group of partners presents logistical challenges related to securing necessary support and resources, scheduling and coordinating activities, record-keeping, documentation, and accountability. Partners who are interested in collaborating on a CBPR project often need support in negotiating their roles, a crucial step in beginning a successful research partnership. Further, otherwise capable researchers may fall short of their potential because they never engage in research partnerships or face numerous hurdles in getting started. This is in part because researchers may need to gradually build their mastery of the roles required in partnered research ( Sheikhattari & Kamangar, 2010 ).
One solution to promote community-academic partnerships to address health disparities is the provision of "seed funding". These CBPR small grants boost partnerships, develop methods of engaging the community, and identify shared research priorities ( Coombe et al., 2023 ). These steps are helpful to flesh out during a small project, before seeking larger grants with more stringent expectations and timelines. Fostering the relationship takes time, and traditional research partnerships do not account for the additional skills and time that successful CBPR projects require. Factors contributing to the success of CBPR seed funding include the development of operational, training, and mentoring capacity to address challenges. A strong infrastructure that facilitates connections, communication, and innovation enables the development of a network of diverse stakeholders. Co-learning activities, relevant skill-building opportunities, and technical assistance are also essential for creating and sustaining a vibrant ecosystem of CBPR projects and partnerships. However, there are also significant problems associated with seed funding. Institutions and funders rarely provide sufficient time and resources for the critical stage of establishing equitable partnerships, especially beyond the initial funding period. Partner preparedness can significantly impact partnership development and sustainability. Additionally, there is limited evidence on approaches that intentionally combine initial funds, capacity building, and experienced guidance from community-academic partners to improve the effectiveness and sustainability of CBPR partnerships ( Coombe et al., 2023 ; Thompson et al., 2010 ; Jenkins et al., 2020 ). Overall, seed funding may help establish and support community-academic partnerships, but there is a need for ongoing support and resources to ensure the success and sustainability of such programs ( Kegler et al., 2016 ).
In this paper, we introduce a Small CBPR Grants Program that aimed to create, sustain, and grow academic-community partnerships addressing health disparities of underserved populations. First, we present the program's methods and results. Then we share our lessons learned (e.g., how such an approach requires capacity-building and training services). We close the paper by proposing a new evaluation framework and offering our final thoughts. In our case, this approach led to the establishment of a CBPR Center (i.e., ASCEND), a foundational infrastructure for successful CBPR initiatives and sustainable partnerships.
The ASCEND Small Grants Program
The ASCEND Center, a multidisciplinary program, was created under the Morgan State University (MSU) Division for Research and Economic Development. It housed the ASCEND Small CBPR Grants Program, funded by the National Institutes of Health BUILD initiative, which aimed to create capacity for designing and implementing community-oriented research projects at MSU in Baltimore, Maryland ( Kamangar et al., 2017 ). The program provided up to $20,000 in seed funding per project, along with capacity development, training, and technical assistance services. A joint effort between an MSU faculty member and a community investigator was required to apply for the grants. The Morgan State University Prevention Sciences Research Center (PSRC) administered the program, building on over 15 years of successful research partnerships with underserved communities in Baltimore. One such partnership was CEASE, which was created in 2007 to find solutions to tobacco health disparities in urban settings. The CEASE program grew into a multifaceted partnership, including peer-motivation smoking cessation interventions and preventive and policy advocacy initiatives ( Petteway et al., 2019 ; Sheikhattari et al., 2016 ; Wagner et al., 2016 ). The small grants program was modeled after past successful initiatives of the CEASE partnership and the PSRC. It was developed and implemented from 2017 to 2019 to increase the capacity of MSU faculty and students to conduct health research and engage communities in research. The main purpose was to incentivize the development of community-academic partnerships and nurture the formation of an organic local network of CBPR investigators supporting each other and maximizing their overall impact.
Morgan CARES Network
The CARES model, which evolved organically in tandem with program implementation and is grounded in stages of readiness, guided the evaluation and capacity-building efforts. It was the result of collaboration among our partners, including project group members and the Community University Advisory Board (CUAB), as a larger learning community, to conduct high-quality research that addresses community health and reduces health disparities. The grants awarded for the CBPR projects served as a catalyst for subsequent awards and formed the foundation for the community-engagement core of a new Center for Urban Health Disparities Research and Innovation ( Sheikhattari, 2022 ; Akintobi, 2021 ). Since 2019, the Morgan CARES Network has been the home to all CBPR partners, many of whom have contributed to shaping the organization's governing structure and have held key leadership positions. The Morgan CARES Network has also instituted a new seed community awards program, with established partnerships assuming mentoring roles. Several partners have successfully brought in large awards through this program ( Sheikhattari et al., 2022 ).
Program Design, Oversight, and Financial Management
Similar to many CBPR projects, this was an iterative initiative that developed over time. The program utilized a mixed methods case study design, as outlined by Creswell and Clark ( Creswell & Clark, 2017 ). This program ran from Spring 2016 to Summer 2019 at MSU. To ensure that the program was culturally sensitive and responsive to the needs of the community, several individuals from diverse community and university backgrounds were recruited to form the Community University Advisory Board (CUAB). The CUAB played a vital role in shaping the program by co-creating and approving the request for proposals (RFP), promoting the program to potential community and academic investigators, identifying review committees, and recommending proposals for NIH funding. Community engagement was also critical in ensuring that the program was successful. Additionally, to promote transparency and ensure shared access and control of program funds, program management was sub-awarded to a community-oriented fiscal agency called Fusion Partnerships, Inc.
Description of the Grant Application Process
The program utilized a competitive process with three rounds of requests for proposals, resulting in the selection of 14 grantees. The implementation of the program involved a comprehensive approach that included a call for proposals, information sessions, proposal review and funding, grantee workshops and technical assistance, and monitoring and evaluation. Successful grantees were further supported in attending national conferences, publishing their results, and applying for other grants to continue their work. The program supported projects that aimed to build equitable partner relationships and explore collaborative research interests in various health-related areas (given the source of funding was NIH). These projects could include, for example, community assessments, health education and promotion, pilot testing of innovative interventions, or evaluation of existing programs and initiatives. The CBPR projects were jointly led by investigators from the university and the community in an intentional design to promote equal power between partners. Each round of the grant program began with the announcement of the funding opportunity on the ASCEND website, through MSU campus-wide emails, and through broad dissemination to community networks. Interested applicants contacted the ASCEND program for more information. As an initial step, letters of intent were required from all applicants intending to submit a proposal. The length of the proposal was up to six pages following the NIH format, and the budget was up to $20,000 in direct costs, with travel money for conference participation and presentations provided outside the program upon successful completion of the projects. The program provided matchmaking services, connecting interested community members with appropriate MSU faculty members and vice versa, to support partnerships in developing full proposals.
Technical assistance workshops were also provided; 26 partnerships were represented by 33 individuals at these workshops.
Selection Criteria and Review Process for Proposals in the CBPR Project
Grantees were selected through a rigorous and competitive review process. Each proposal was evaluated by two to three external reviewers, at least one an academician and one a community member. The comments were then discussed at a CUAB meeting to decide which projects should be recommended to the NIH for funding. The selection criteria and review process were crucial components in ensuring the quality and feasibility of the proposed projects: the review process helped to ensure that projects were viewed from multiple perspectives and evaluated based on their potential for creating impactful CBPR partnerships. The selection criteria focused on building equitable partner relationships, exploring collaborative research interests, and addressing health-related issues in the community. The proposals that best met these criteria were recommended for funding.
Evaluation and Monitoring of the CBPR Small Grants Program
A mixed methods case study design was used to evaluate the program: the CBPR Small Grants Program was monitored and evaluated using both qualitative and quantitative data collection methods. The monitoring and evaluation team consisted of the program evaluator, principal investigator, program manager, and research associate, in collaboration with an external team from the University of New Mexico Center for Participatory Research. To enhance the small grants initiative, we consulted the University of New Mexico Center for Participatory Research in 2018, adapted their Engage for Equity (E2) tools ( N. Wallerstein et al., 2020 ), and incorporated the tools into program monitoring and evaluation. The E2 CBPR model was used for visioning with grantees, and the constructs and metrics of partnering from each of the model's domains were chosen for evaluating the grantee dynamic partnership processes ( Figure 2 ). The language of items was modified as needed for respondent comprehension, resulting in tailored tools and instruments that would be useful, consistent, and valid for use at different phases of the program. Data were collected through project progress reports submitted by project teams, discussions at workshops, visioning exercises using one of the E2 tools (the River of Life) ( N. Wallerstein et al., 2020 ), individual interviews, and other qualitative assessments.
The collected data included constructs measuring context, partnership processes (partnership experiences, perceptions, power dynamics, and participation), intervention, and outcomes. For the context domain, we captured background information on partnerships such as sociodemographic information, field or discipline of work and the type of organizations from letters of intent and individual interviews. The visioning exercise using the River of Life as well as discussions at workshops were utilized in the partnership process domain to assess the quality of partner relationships using indicators like trust, communication effectiveness and collaboration. In this domain, we also captured challenges and conflicts by identifying barriers and conflict resolution strategies. Additionally, perceptions of the equality in the partnerships were examined by assessing the equality in decision-making power, resource distribution, ability to resolve conflicts and perceived benefits of members of the partnerships. The measures for the intervention domain examined the achievement of project goals and objectives, dissemination and sustainability efforts such as publications and presentations, sustainability plans and the partnership potential for securing future grants. Project progress reports, workshop discussions and in-depth interviews were used to capture these indices of the intervention. For outcomes, using project progress reports and individual interviews, we assessed the expansion of partnerships measured by the growth in number of collaborations and establishment of new partnerships. We also measured the scaling up of projects by assessing the expansion of project scope and impact, securing funding for larger studies, duration of sustained partnerships, development of skills and knowledge.
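The mapping of evaluation domains to constructs and data sources described above can be summarized as a simple lookup structure. The sketch below is illustrative only: the labels are condensed from the paragraph, not taken from a formal codebook:

```python
# Illustrative summary of the evaluation plan: E2 domains mapped to the
# constructs measured and the data sources used, as described in the text.
EVALUATION_PLAN = {
    "context": {
        "constructs": ["sociodemographics", "field or discipline", "organization type"],
        "sources": ["letters of intent", "individual interviews"],
    },
    "partnership_processes": {
        "constructs": ["trust", "communication effectiveness", "collaboration",
                       "equality in decision-making", "conflict resolution"],
        "sources": ["River of Life visioning", "workshop discussions"],
    },
    "intervention": {
        "constructs": ["goal achievement", "dissemination", "sustainability plans"],
        "sources": ["progress reports", "workshop discussions", "in-depth interviews"],
    },
    "outcomes": {
        "constructs": ["partnership expansion", "project scale-up",
                       "funding secured", "skills and knowledge developed"],
        "sources": ["progress reports", "individual interviews"],
    },
}

def sources_for(domain: str) -> list[str]:
    """Return the data sources used to measure a given domain."""
    return EVALUATION_PLAN[domain]["sources"]
```

Organizing the plan this way makes the overlap visible: progress reports and interviews feed multiple domains, while the visioning exercise is specific to partnership processes.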
After data collection, quantitative data were summarized into descriptive tables and figures, while the qualitative data were reviewed and coded based on the themes that emerged. The evaluation process helped to identify areas where the program was successful, as well as areas where improvement was needed. Overall, the evaluation process provided valuable insights that helped to shape and refine the program, making it more effective and responsive to the needs of the community and academic partners. The use of tailored evaluation tools and metrics ensured that the program was able to capture the unique perspectives and experiences of all stakeholders, leading to a more comprehensive and nuanced understanding of the program’s impact. | Results
Overview of the Applications and Funded Projects
Figure 3 summarizes the program’s outcomes. Our program consisted of three rounds, with a total of 105 individuals (51 academic and 54 community) forming 48 partnerships and submitting 58 letters of intent. Out of these, 33 full proposals were submitted and reviewed. Fourteen projects received up to $20,000 each in seed funding. The remaining 19 un-funded projects received reviewers’ summary statements and guidance for resubmission. The program involved 27 faculty mentors who provided guidance to 20 graduate and 31 undergraduate students from MSU. Funded projects addressed various health-related research topics, including nutrition, tobacco cessation, medical technology, needs assessments, sanitation, built environment, grief support, and mental health, among others.
Table 1 summarizes the characteristics of the principal investigators (PIs) of the 14 funded projects. A total of 29 individuals served as co-principal investigators across projects. Overall, there were more female (n=21; 72.4%) compared to male (n=8; 27.6%) PIs. Most PIs were Black/African American (n=20; 69.0%). A majority of the PIs (65.5%) had no previous experience with CBPR, however, more academic PIs (n=6; 42.9%) reported having some previous exposure to CBPR compared to community PIs (n=4; 26.7%). Previous experience with grant writing was also more common with academic PIs (n=10; 71.4%) compared to community PIs (n=8; 53.3%). Of the 29 PIs, most (62.1%) had some experience applying for grants in the past. As of the time of this report, 16 presentations were made at scientific conferences, two manuscripts were being developed, four of the 14 partnerships had applied for external funding to continue their work in the community, and two of them had been funded.
Table 2 presents a summary of the collaborative activities of the 14 funded project partnerships, documenting their achievements and challenges. Most PIs (71%) believed they have a shared understanding of the program’s goals and mission. Similarly, most partnerships reported equal involvement in the project. Similar results were reported for decision-making power. | Discussion
Seed funding is a critical element for initiating CBPR projects. Some studies have emphasized the value of seed funding alone ( Main et al., 2012 ; Thompson et al., 2010 ), while others have suggested that it should be complemented by additional support ( Coombe et al., 2023 ; Jenkins et al., 2020 ; Kegler et al., 2016 ). Our research findings underline the importance of providing comprehensive services for recipients of such awards, tailored to the specific stage of readiness of the partners. As supported by the literature, the evaluation of the CBPR Small Grants Program has revealed that relationship building, role negotiation, trust, power distribution, and decision-making are key elements of the partnership development process and the overall success of the project ( Coombe, C. M., 2023 ). Academic programs can support research partners, and research offices should update their processes to meet the needs of CBPR studies. Researchers from Virginia Commonwealth University propose using natural language processing and deep-learning algorithms to categorize Institutional Review Board protocols into five partnership categories: Non-Community Engaged Research, Instrumental, Academic-led, Cooperative, and Reciprocal ( Zimmerman et al., 2022 ). This categorization can help identify studies that operate at a higher level of community engagement than investigator-recorded data indicate, including those recorded as Non-Community-Engaged Research. Such an approach could help universities and research institutions track progress and coordinate efforts to meet community needs.
Partnership development is a crucial component of the CBPR process. For individuals new to CBPR, the Connection phase represents a critical entry point that requires careful attention to the needs and understanding of potential partners. Previous research highlights the multiple capacity-development and support services needed for novice CBPR investigators ( Teufel-Shone et al., 2019 ; Andrews et al., 2013 ; Collins et al., 2023 ). Our experience, consistent with the literature, emphasizes the need for conscious efforts to support organic relationship formation, networking, and idea exchange well before discussing specific CBPR collaborations ( Cleveland, 2014 ). Therefore, networking opportunities were provided to a broad range of stakeholders with varied disciplines, experiences, and skills to increase the likelihood of future collaborations, peer support, and re-entry into the CARES cycle. Unfortunately, academic centers and funding agencies often underinvest in the Connection stage, providing more support to applicants who have already formed formalized partnerships around a project or proposal ( Coombe et al., 2020 ; Israel et al., 2006 ). The CARES model highlights the importance of partners building rapport and engaging in conversations and negotiations during the Partnership Development stage before innovation and collaborative actions begin. Activities associated with the Connection phase pave the way for individuals who enter the program without existing partner prospects, a consistent challenge that carries risk and mimics real-world scenarios in which relationships are built organically based on mutual interests and agreed-upon pre-conditions.
At the Innovation stage, partners draw upon their individual experiences, ambitions, and expertise to co-develop a project that is mutually beneficial, especially for the target community, and agreed upon ( Brush et al., 2011 ; Ortiz et al., 2020 ; Samuel et al., 2018 ). To ensure equitable partnerships, a common practice is to incorporate role and responsibility negotiations into the planning process ( Winckler et al., 2013 ). Additionally, providing seed funding through small awards can strengthen and maintain momentum, while also piloting a program and collecting preliminary data ( Brush et al., 2011 ; Coombe et al., 2020 ; Winckler et al., 2013 ). It is essential to note that regardless of the project size or grant amount, the CARES model assumes that partnership teams have secured funding for the plan formulated during the Innovation stage before progressing to the Collaborative Action stage.
Collaborative Action and Sustainability in CBPR Partnerships
After successful partnership development and innovation, partners can put their collaborative plan into action. To ensure success, it is important to prompt reflection on the partnership process, which can lead to identifying the knowledge, skills, and resources needed to carry out the plan. Previous studies have emphasized the need to clarify roles and concerns to prevent issues that, if left unaddressed, could affect the success of the project (Wallerstein & Duran, 2006; Winckler et al., 2013; Coombe et al., 2020; Brush et al., 2011). While it is important to involve community members in discussions around research and analysis, the purpose here is not to teach those skills so that community partners can assume those roles. Our findings suggest that it may be more cost-effective to focus on empowering community and academic members to fulfill their respective roles rather than having community members become researchers and researchers assume the roles of community partners. A good strategy is to train partners in validated tools and methods for involving diverse groups in addressing issues. One example is the SEED Method, a participatory approach that can be adapted to develop strategies for reducing health problems, such as opioid misuse and overdoses, and implemented by community stakeholders in collaboration with a participatory research team (Zimmerman et al., 2020). What is important is to build each partner's capacity to understand, appreciate, and support the incorporation of their unique worldview, resources, and knowledge into the collaboration. In the visioning workshop facilitated by the Engaged for Equity team, our partners created visuals based on the River of Life, resulting in insightful conversations about where they stand, what they want to achieve collectively, and how the program could support them along the way.
Reflection could be encouraged through brief self-evaluation of the partnership, feeding into a co-authored project report both during implementation and at the conclusion of the project.
The final stage of CBPR collaborations usually involves dissemination efforts and planning for the future. Our partners were productive: they participated in conferences, wrote manuscripts and other grants, and served as the inaugural members of Morgan CARES. Participation in advanced professional writing and other targeted skill-development sessions that support the co-development of educational materials, publications, and presentations is a key factor in continuing the relationship, sharing the credit, and aiming for greater impact. Facilitating planning sessions for new partners to schedule their dissemination and plan future activities helps illuminate pathways for continued engagement beyond the project and further strengthens the partnership. Sustainability is an overarching goal of CBPR, which aims to produce long-lasting, meaningful impacts on communities through collaboration and the long-term maintenance of programs and initiatives. Positive partnership experiences and continued funding are two significant predictors of sustainability and maintenance (Wallerstein & Duran, 2006; Coombe et al., 2020; Brush et al., 2011). However, some partnerships may not survive if issues are not addressed early on. One major challenge that can jeopardize CBPR projects is negative partnership experiences, such as imbalanced decision-making power, as revealed by the qualitative findings. Another is funding difficulty, as evidenced by the qualitative themes, participant feedback, and the small number of projects that applied for and secured subsequent funding. Many of these issues could be prevented. Given that securing subsequent funding following a small grant is considered a measure of success and has been incorporated into CBPR programming previously, we suggest that programs implementing this model offer or facilitate access to supplemental funding for promising initiatives.
Programs using the CARES model could also fund dissemination activities and encourage partnership teams to co-develop professional writings and other publications, a practice other programs have noted as an important aspect of continued capacity building and success that contributes to partnership maintenance.
Capacity Development
Capacity building is a crucial aspect of CBPR, and it involves training and technical assistance. Although several frameworks guide the implementation of CBPR, capacity building remains a central tenet. Participants in the CBPR Small Grants Program emphasized that capacity-building activities enhance the co-learning experience, as evidenced by qualitative feedback gathered during follow-up and by the workshops and technical assistance sessions delivered in response to requests. These sessions can be customized to the required skill and knowledge level and to enhance the capacity of the whole partnership, contributing to the development of more effective projects.
To ensure the long-term orientation of capacity development, it is best to embed it in sustainable infrastructure where partner involvement happens organically, rather than through academically controlled didactic trainings and mentoring. As individuals and partnership teams master the skills and competencies relevant to CBPR approaches to health equity, they can become mentors for other less-experienced individuals and teams. Previous initiatives such as the Community Research Scholars Initiative (CRSI) in Cleveland, Ohio, have attempted to equalize power by providing intensive research training and mentoring to members of the community and community-based organizations ( Collins et al., 2023 ). This approach to capacity building not only facilitates shared understanding but also validates the community partner as credible, which is noted in the literature as a perpetual challenge for community partners in research relationships. Overall, it is essential to prioritize capacity building as a fundamental component of CBPR and to continually assess and revise strategies for its effective implementation.
Limitations
It is important to acknowledge the program’s limitations to fully understand its impact. First, the program was limited by its small sample size and the lack of long-term follow-up data. The findings were confined to the context of one Historically Black College and University, making it difficult to generalize the results to other minority-serving institutions. Second, the program was evaluated through a formative evaluation model relying largely on unstructured qualitative evidence, meeting notes, and project reports. The evaluation was being developed and refined, and the data triangulated, while the program was administered and the CARES model developed, which made it challenging to gather comprehensive data to inform the program’s design and implementation. Nevertheless, the formative evaluation offered valuable insights into the program’s strengths and weaknesses, which were used to improve the program and inform the development of the network. The challenges the program faced led to a deeper understanding of the unique needs of Historically Black Colleges and Universities and to more effective strategies for addressing them; these weaknesses ultimately became a springboard for a more impactful and effective model. Despite these limitations, the ASCEND Small CBPR Grants Program has demonstrated remarkable potential for creating a network that can empower and support minority-serving institutions and their students.
Recommendations
Effective CBPR requires a robust infrastructure to support the partnership between community and academia. Without such support, CBPR projects can become disjointed, expensive, and less effective. This, in turn, can discourage junior researchers from getting involved and maintain the siloed nature of this work that has perpetuated historical mistrust between the community and academia. To address these issues, we recommend the following:
Creating and maintaining an infrastructure that can facilitate community-academic partnerships and programs in a variety of settings. Such infrastructure should include access to resources, support for project management, and a mechanism for evaluating the effectiveness of the partnership.
Developing an accredited certificate program that can provide community and academic partners with the credentials and skills they need to participate effectively in CBPR projects. Such a program can help to build trust between community partners and academia and ensure that the partnership is grounded in shared values and principles.
Establishing non-profit, community-owned academic centers affiliated with universities. Such a center can serve as a hub for community-academic partnerships and provide the necessary resources and support to ensure that CBPR projects are effective, sustainable, and responsive to the needs of the community.
By implementing these recommendations, we may be able to create a more supportive environment for CBPR initiatives and promote the development of strong, effective, and sustainable community-academic partnerships. These partnerships can help to break down silos between the community and academia and promote mutual respect, trust, and collaboration.
Conclusion
Our CBPR Small Grants Program led to the development of the CARES model, a novel approach to guide community-academic collaborative projects to address health disparities. The model is flexible and adaptable to the changing needs and challenges of community-academic partnerships. It combines existing CBPR initiatives and practices with funding from our program to support community-academic collaborations. Adoption of the CARES model could help address the dearth of studies examining partnership processes independent of project outcomes. We call for further funding and support for CBPR partnerships to implement similar models and promote equitable research that reflects the needs of communities and improves health outcomes for all.
Abstract
Community-based participatory research (CBPR) is an effective approach for addressing health disparities by integrating diverse knowledge and expertise from both academic and community partners throughout the research process. However, greater investment is needed in the foundational infrastructure and resources necessary for building and maintaining lasting, trusting research partnerships and supporting them to generate impactful CBPR-based research knowledge and solutions. A small CBPR grants program is a seed-funding mechanism that may be particularly helpful to minority-serving institutions and universities seeking to invest in genuine community-engaged participatory research. Between 2016 and 2019, the Morgan State University Prevention Sciences Research Center, in collaboration with other community and academic organizations, provided 14 small CBPR awards to new partnerships and evaluated the successes and challenges of the program over a period of three years. To support this goal, technical support and training were provided to these partnerships to help with their growth and success.
The expected outcomes included trusting relationships and equitable partnerships, as well as publications, presentations, and new proposals and awards to work on mutually identified issues. The program resulted in partnerships that, in most cases, continued beyond the program; the founding of a CBPR center, ASCEND; and several additional funding awards. Keys to the program’s success were supporting the formation of research partnerships through networking opportunities and information sessions, as well as providing small grants to incentivize the development of innovative concepts and projects. A learning network and local support group were also created to enhance productivity and the overall impact of each project.
Qualitative Evidence
Stages of Partnership Readiness
As shown in Table 3, several stages of partnership readiness were identified through qualitative assessments. Partners with similar levels of readiness reported comparable assets, perceived needs, and recommended services. The table summarizes these stages based on reported qualitative data on the level of partnership experience, knowledge, and readiness in CBPR. The first stage was orientation and connection, which is relevant for junior academic and community investigators with no prior experience in CBPR. This stage involved relationship-building support and opportunities and provided an orientation to the basic foundations of engaging in research partnerships, including roles and contributions. Partners with prior relationships and some experience but without clear ideas and negotiated research concepts were labeled as being in the ideation and innovation stage. These were individuals who had formed relationships but needed support to generate novel ideas and write their innovative concepts into a proposal. Partners at the collaboration stage were those with funded projects. Some of the more successful partnerships that completed their projects then progressed to the stages of actively disseminating the results, sustaining their relationships, and planning their next collaborations.
Capacity Development and Training
Box 1 lists the technical workshops offered to empower CBPR collaboration between MSU and partnering communities. One of the most common services was opportunities for professional networking, match-making recommendations, and general orientation on building trusting research partnerships with those who have complementary knowledge and skills but come from different backgrounds. At this stage, new and established partners were able to connect and start building relationships. According to one academic partner, “Connecting with the community helped them to better understand our [academic] world and vice versa.” Another partner said, “It started off slow, but we eventually got the rhythm....” Partnerships under development were provided with information sessions and relevant activities to orient and prepare them for participation in the initiative.
As shown in Figure 4 , newly formed or existing partnerships often requested technical assistance and workshops to develop their proposals and actively collaborate on projects. These services included proposal writing, research budgets, and Institutional Review Board applications, and were offered on a case-by-case basis in informal settings. As one partner expressed, “The support given was great, especially hearing about other people’s experiences in a small group setting.” Grant writing was one of the identified needs, and relevant support was provided to teams. As one partner shared, “This initiative helped me face my fears regarding grant writing and grant management. It lifted my confidence in my ability to implement a research project and taught me valuable skills in doing so.”
Regarding equal partnership, one partner commented, “We worked well together, and everyone’s roles were complimentary; we understood the objectives, and we were on the same page.” As the changing needs at this level require more specialized assistance, plenary discussions, and reflection, renowned CBPR expert consultants were brought in to enhance equitable collaboration. One partner stated, “This helped me gain experience in conducting CBPR.”
Dissemination and Sustainability
Regarding data ownership and participation in dissemination activities, one partner put it this way: “The research was translational, and information was provided to the community. It helped me to work with the community in a different capacity, giving me a different perspective.” Another partner emphasized, “While I have had experience running a research team, it was with training wheels. This opportunity allowed me to write a grant, run a research team, and manage writing the manuscripts for publication.” Yet another partner delved further into this point, stating, “We made collective decisions; the community came up with the questions, went out and collected the data.” Sustainability emerged as a challenge at the project’s end, as maintaining partnerships was crucial. Partnerships needed additional funding to continue their work in the communities, which would also help them sustain and maintain their relationships. As one partner highlighted, “Getting funding to continue is important to give back to the community.” To sustain relationships, we maintained a network of partners and facilitated communication within the network based on the needs identified. We used emails, newsletters, and other means of communication to share information and resources on securing external funding and continued to offer technical assistance and other forms of support for securing funding. One partner noted, “This grant has expanded my research agenda and will make me more competitive for additional grants. The funding has also provided me with an opportunity to train more students.”
Acknowledgment
The authors would like to express their sincere gratitude to Dr. Nina Wallerstein and the University of New Mexico Center for Participatory Research, as well as Dr. Amy Schulz and her team at the Detroit Urban Research Center, for their invaluable support and guidance throughout this project. The authors would also like to thank the CEASE partnership for their collaboration and contribution to the success of this project. We also want to thank the 14 CBPR project community and faculty teams.
Funding
The authors would also like to acknowledge the ASCEND Awards, NIGMS (UL1GM118973 & RL5GM118972), and the RCMI award, NIMHD (U54MD013376), for providing the necessary funding to conduct this research.
Metrop Univ. 2023 Nov 1; 34(5):7-19